00:00:00.001 Started by upstream project "autotest-per-patch" build number 131310 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.018 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.020 The recommended git tool is: git 00:00:00.020 using credential 00000000-0000-0000-0000-000000000002 00:00:00.023 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.042 Fetching changes from the remote Git repository 00:00:00.044 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.079 Using shallow fetch with depth 1 00:00:00.079 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.079 > git --version # timeout=10 00:00:00.136 > git --version # 'git version 2.39.2' 00:00:00.136 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.227 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.227 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:02:12.652 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:02:12.665 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:02:12.678 Checking out Revision 58e4f482292076ec19d68e6712473e60ef956aed (FETCH_HEAD) 00:02:12.678 > git config core.sparsecheckout # timeout=10 00:02:12.689 > git read-tree -mu HEAD # timeout=10 00:02:12.709 > git checkout -f 58e4f482292076ec19d68e6712473e60ef956aed # timeout=5 00:02:12.727 Commit message: "packer: Fix typo in a package name" 00:02:12.727 > git rev-list --no-walk 
58e4f482292076ec19d68e6712473e60ef956aed # timeout=10 00:02:12.813 [Pipeline] Start of Pipeline 00:02:12.826 [Pipeline] library 00:02:12.828 Loading library shm_lib@master 00:02:12.828 Library shm_lib@master is cached. Copying from home. 00:02:12.843 [Pipeline] node 00:02:12.850 Running on VM-host-SM0 in /var/jenkins/workspace/raid-vg-autotest 00:02:12.852 [Pipeline] { 00:02:12.858 [Pipeline] catchError 00:02:12.859 [Pipeline] { 00:02:12.871 [Pipeline] wrap 00:02:12.879 [Pipeline] { 00:02:12.888 [Pipeline] stage 00:02:12.890 [Pipeline] { (Prologue) 00:02:12.905 [Pipeline] echo 00:02:12.906 Node: VM-host-SM0 00:02:12.911 [Pipeline] cleanWs 00:02:12.919 [WS-CLEANUP] Deleting project workspace... 00:02:12.919 [WS-CLEANUP] Deferred wipeout is used... 00:02:12.925 [WS-CLEANUP] done 00:02:13.116 [Pipeline] setCustomBuildProperty 00:02:13.213 [Pipeline] httpRequest 00:02:13.610 [Pipeline] echo 00:02:13.612 Sorcerer 10.211.164.101 is alive 00:02:13.623 [Pipeline] retry 00:02:13.625 [Pipeline] { 00:02:13.642 [Pipeline] httpRequest 00:02:13.646 HttpMethod: GET 00:02:13.647 URL: http://10.211.164.101/packages/jbp_58e4f482292076ec19d68e6712473e60ef956aed.tar.gz 00:02:13.647 Sending request to url: http://10.211.164.101/packages/jbp_58e4f482292076ec19d68e6712473e60ef956aed.tar.gz 00:02:13.654 Response Code: HTTP/1.1 200 OK 00:02:13.654 Success: Status code 200 is in the accepted range: 200,404 00:02:13.655 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_58e4f482292076ec19d68e6712473e60ef956aed.tar.gz 00:02:14.244 [Pipeline] } 00:02:14.262 [Pipeline] // retry 00:02:14.270 [Pipeline] sh 00:02:14.596 + tar --no-same-owner -xf jbp_58e4f482292076ec19d68e6712473e60ef956aed.tar.gz 00:02:14.611 [Pipeline] httpRequest 00:02:15.009 [Pipeline] echo 00:02:15.010 Sorcerer 10.211.164.101 is alive 00:02:15.020 [Pipeline] retry 00:02:15.022 [Pipeline] { 00:02:15.036 [Pipeline] httpRequest 00:02:15.041 HttpMethod: GET 00:02:15.041 URL: 
http://10.211.164.101/packages/spdk_5c4ed23c85d81c7f5ac93453f8125f188c897471.tar.gz 00:02:15.042 Sending request to url: http://10.211.164.101/packages/spdk_5c4ed23c85d81c7f5ac93453f8125f188c897471.tar.gz 00:02:15.047 Response Code: HTTP/1.1 200 OK 00:02:15.048 Success: Status code 200 is in the accepted range: 200,404 00:02:15.049 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_5c4ed23c85d81c7f5ac93453f8125f188c897471.tar.gz 00:02:22.207 [Pipeline] } 00:02:22.224 [Pipeline] // retry 00:02:22.232 [Pipeline] sh 00:02:22.536 + tar --no-same-owner -xf spdk_5c4ed23c85d81c7f5ac93453f8125f188c897471.tar.gz 00:02:25.834 [Pipeline] sh 00:02:26.111 + git -C spdk log --oneline -n5 00:02:26.111 5c4ed23c8 util/fd_group: improve logs and documentation 00:02:26.111 ffd9f7465 bdev/nvme: Fix crash due to NULL io_path 00:02:26.111 ee513ce4a lib/reduce: If init fails, unlink meta file 00:02:26.111 5a8c76d99 lib/nvmf: Add spdk_nvmf_send_discovery_log_notice API 00:02:26.111 a70c3a90b bdev/lvol: add allocated clusters num in bdev_lvol_get_lvols 00:02:26.131 [Pipeline] writeFile 00:02:26.145 [Pipeline] sh 00:02:26.424 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:02:26.437 [Pipeline] sh 00:02:26.779 + cat autorun-spdk.conf 00:02:26.779 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:26.779 SPDK_RUN_ASAN=1 00:02:26.779 SPDK_RUN_UBSAN=1 00:02:26.779 SPDK_TEST_RAID=1 00:02:26.779 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:26.786 RUN_NIGHTLY=0 00:02:26.788 [Pipeline] } 00:02:26.803 [Pipeline] // stage 00:02:26.821 [Pipeline] stage 00:02:26.824 [Pipeline] { (Run VM) 00:02:26.838 [Pipeline] sh 00:02:27.118 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:02:27.118 + echo 'Start stage prepare_nvme.sh' 00:02:27.118 Start stage prepare_nvme.sh 00:02:27.118 + [[ -n 0 ]] 00:02:27.118 + disk_prefix=ex0 00:02:27.118 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]] 00:02:27.118 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]] 00:02:27.118 + 
source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf 00:02:27.118 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:27.118 ++ SPDK_RUN_ASAN=1 00:02:27.118 ++ SPDK_RUN_UBSAN=1 00:02:27.118 ++ SPDK_TEST_RAID=1 00:02:27.118 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:27.118 ++ RUN_NIGHTLY=0 00:02:27.118 + cd /var/jenkins/workspace/raid-vg-autotest 00:02:27.118 + nvme_files=() 00:02:27.118 + declare -A nvme_files 00:02:27.118 + backend_dir=/var/lib/libvirt/images/backends 00:02:27.118 + nvme_files['nvme.img']=5G 00:02:27.118 + nvme_files['nvme-cmb.img']=5G 00:02:27.118 + nvme_files['nvme-multi0.img']=4G 00:02:27.118 + nvme_files['nvme-multi1.img']=4G 00:02:27.118 + nvme_files['nvme-multi2.img']=4G 00:02:27.118 + nvme_files['nvme-openstack.img']=8G 00:02:27.118 + nvme_files['nvme-zns.img']=5G 00:02:27.118 + (( SPDK_TEST_NVME_PMR == 1 )) 00:02:27.118 + (( SPDK_TEST_FTL == 1 )) 00:02:27.118 + (( SPDK_TEST_NVME_FDP == 1 )) 00:02:27.118 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:02:27.118 + for nvme in "${!nvme_files[@]}" 00:02:27.118 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G 00:02:27.118 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:02:27.118 + for nvme in "${!nvme_files[@]}" 00:02:27.118 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G 00:02:27.118 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:02:27.118 + for nvme in "${!nvme_files[@]}" 00:02:27.118 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G 00:02:27.118 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:02:27.118 + for nvme in "${!nvme_files[@]}" 00:02:27.118 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh 
-n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G 00:02:27.118 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:02:27.118 + for nvme in "${!nvme_files[@]}" 00:02:27.118 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G 00:02:27.118 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:02:27.118 + for nvme in "${!nvme_files[@]}" 00:02:27.118 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G 00:02:27.118 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:02:27.118 + for nvme in "${!nvme_files[@]}" 00:02:27.118 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img -s 5G 00:02:27.377 Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:02:27.377 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu 00:02:27.377 + echo 'End stage prepare_nvme.sh' 00:02:27.377 End stage prepare_nvme.sh 00:02:27.387 [Pipeline] sh 00:02:27.666 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:02:27.666 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex0-nvme.img -b /var/lib/libvirt/images/backends/ex0-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img -H -a -v -f fedora39 00:02:27.666 00:02:27.666 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant 00:02:27.666 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk 00:02:27.666 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest 00:02:27.666 HELP=0 
00:02:27.666 DRY_RUN=0 00:02:27.666 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme.img,/var/lib/libvirt/images/backends/ex0-nvme-multi0.img, 00:02:27.666 NVME_DISKS_TYPE=nvme,nvme, 00:02:27.666 NVME_AUTO_CREATE=0 00:02:27.666 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img, 00:02:27.666 NVME_CMB=,, 00:02:27.666 NVME_PMR=,, 00:02:27.666 NVME_ZNS=,, 00:02:27.666 NVME_MS=,, 00:02:27.666 NVME_FDP=,, 00:02:27.666 SPDK_VAGRANT_DISTRO=fedora39 00:02:27.666 SPDK_VAGRANT_VMCPU=10 00:02:27.666 SPDK_VAGRANT_VMRAM=12288 00:02:27.666 SPDK_VAGRANT_PROVIDER=libvirt 00:02:27.666 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:02:27.666 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:02:27.666 SPDK_OPENSTACK_NETWORK=0 00:02:27.666 VAGRANT_PACKAGE_BOX=0 00:02:27.666 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:02:27.666 FORCE_DISTRO=true 00:02:27.666 VAGRANT_BOX_VERSION= 00:02:27.666 EXTRA_VAGRANTFILES= 00:02:27.666 NIC_MODEL=e1000 00:02:27.666 00:02:27.666 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt' 00:02:27.666 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest 00:02:30.199 Bringing machine 'default' up with 'libvirt' provider... 00:02:30.767 ==> default: Creating image (snapshot of base box volume). 00:02:31.026 ==> default: Creating domain with the following settings... 
00:02:31.026 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1729195156_d315bcec5194cee18e84 00:02:31.026 ==> default: -- Domain type: kvm 00:02:31.026 ==> default: -- Cpus: 10 00:02:31.026 ==> default: -- Feature: acpi 00:02:31.026 ==> default: -- Feature: apic 00:02:31.026 ==> default: -- Feature: pae 00:02:31.026 ==> default: -- Memory: 12288M 00:02:31.026 ==> default: -- Memory Backing: hugepages: 00:02:31.026 ==> default: -- Management MAC: 00:02:31.026 ==> default: -- Loader: 00:02:31.026 ==> default: -- Nvram: 00:02:31.026 ==> default: -- Base box: spdk/fedora39 00:02:31.026 ==> default: -- Storage pool: default 00:02:31.026 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1729195156_d315bcec5194cee18e84.img (20G) 00:02:31.026 ==> default: -- Volume Cache: default 00:02:31.026 ==> default: -- Kernel: 00:02:31.026 ==> default: -- Initrd: 00:02:31.026 ==> default: -- Graphics Type: vnc 00:02:31.026 ==> default: -- Graphics Port: -1 00:02:31.026 ==> default: -- Graphics IP: 127.0.0.1 00:02:31.026 ==> default: -- Graphics Password: Not defined 00:02:31.026 ==> default: -- Video Type: cirrus 00:02:31.026 ==> default: -- Video VRAM: 9216 00:02:31.026 ==> default: -- Sound Type: 00:02:31.026 ==> default: -- Keymap: en-us 00:02:31.026 ==> default: -- TPM Path: 00:02:31.026 ==> default: -- INPUT: type=mouse, bus=ps2 00:02:31.026 ==> default: -- Command line args: 00:02:31.026 ==> default: -> value=-device, 00:02:31.026 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:02:31.026 ==> default: -> value=-drive, 00:02:31.026 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-0-drive0, 00:02:31.026 ==> default: -> value=-device, 00:02:31.026 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:31.026 ==> default: -> value=-device, 00:02:31.026 ==> default: -> 
value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:02:31.026 ==> default: -> value=-drive, 00:02:31.026 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:02:31.026 ==> default: -> value=-device, 00:02:31.026 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:31.026 ==> default: -> value=-drive, 00:02:31.026 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:02:31.026 ==> default: -> value=-device, 00:02:31.026 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:31.026 ==> default: -> value=-drive, 00:02:31.027 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:02:31.027 ==> default: -> value=-device, 00:02:31.027 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:31.027 ==> default: Creating shared folders metadata... 00:02:31.027 ==> default: Starting domain. 00:02:32.929 ==> default: Waiting for domain to get an IP address... 00:02:51.119 ==> default: Waiting for SSH to become available... 00:02:51.119 ==> default: Configuring and enabling network interfaces... 00:02:53.678 default: SSH address: 192.168.121.82:22 00:02:53.678 default: SSH username: vagrant 00:02:53.678 default: SSH auth method: private key 00:02:56.211 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:03:04.323 ==> default: Mounting SSHFS shared folder... 00:03:05.258 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:03:05.258 ==> default: Checking Mount.. 
00:03:06.631 ==> default: Folder Successfully Mounted! 00:03:06.631 ==> default: Running provisioner: file... 00:03:07.198 default: ~/.gitconfig => .gitconfig 00:03:07.766 00:03:07.766 SUCCESS! 00:03:07.766 00:03:07.766 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:03:07.766 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:03:07.766 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:03:07.766 00:03:07.838 [Pipeline] } 00:03:07.856 [Pipeline] // stage 00:03:07.866 [Pipeline] dir 00:03:07.866 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt 00:03:07.868 [Pipeline] { 00:03:07.879 [Pipeline] catchError 00:03:07.881 [Pipeline] { 00:03:07.894 [Pipeline] sh 00:03:08.173 + vagrant ssh-config --host vagrant 00:03:08.173 + sed -ne /^Host/,$p 00:03:08.173 + tee ssh_conf 00:03:10.706 Host vagrant 00:03:10.706 HostName 192.168.121.82 00:03:10.706 User vagrant 00:03:10.706 Port 22 00:03:10.706 UserKnownHostsFile /dev/null 00:03:10.706 StrictHostKeyChecking no 00:03:10.706 PasswordAuthentication no 00:03:10.707 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:03:10.707 IdentitiesOnly yes 00:03:10.707 LogLevel FATAL 00:03:10.707 ForwardAgent yes 00:03:10.707 ForwardX11 yes 00:03:10.707 00:03:10.720 [Pipeline] withEnv 00:03:10.723 [Pipeline] { 00:03:10.736 [Pipeline] sh 00:03:11.016 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:03:11.016 source /etc/os-release 00:03:11.016 [[ -e /image.version ]] && img=$(< /image.version) 00:03:11.016 # Minimal, systemd-like check. 
00:03:11.016 if [[ -e /.dockerenv ]]; then 00:03:11.016 # Clear garbage from the node's name: 00:03:11.016 # agt-er_autotest_547-896 -> autotest_547-896 00:03:11.016 # $HOSTNAME is the actual container id 00:03:11.016 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:03:11.016 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:03:11.016 # We can assume this is a mount from a host where container is running, 00:03:11.016 # so fetch its hostname to easily identify the target swarm worker. 00:03:11.016 container="$(< /etc/hostname) ($agent)" 00:03:11.016 else 00:03:11.016 # Fallback 00:03:11.016 container=$agent 00:03:11.016 fi 00:03:11.016 fi 00:03:11.016 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:03:11.016 00:03:11.286 [Pipeline] } 00:03:11.302 [Pipeline] // withEnv 00:03:11.310 [Pipeline] setCustomBuildProperty 00:03:11.325 [Pipeline] stage 00:03:11.327 [Pipeline] { (Tests) 00:03:11.345 [Pipeline] sh 00:03:11.624 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:03:11.920 [Pipeline] sh 00:03:12.201 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:03:12.475 [Pipeline] timeout 00:03:12.475 Timeout set to expire in 1 hr 30 min 00:03:12.477 [Pipeline] { 00:03:12.492 [Pipeline] sh 00:03:12.784 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:03:13.351 HEAD is now at 5c4ed23c8 util/fd_group: improve logs and documentation 00:03:13.363 [Pipeline] sh 00:03:13.643 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:03:13.916 [Pipeline] sh 00:03:14.254 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:03:14.269 [Pipeline] sh 00:03:14.547 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest 
./autoruner.sh spdk_repo 00:03:14.805 ++ readlink -f spdk_repo 00:03:14.805 + DIR_ROOT=/home/vagrant/spdk_repo 00:03:14.805 + [[ -n /home/vagrant/spdk_repo ]] 00:03:14.805 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:03:14.805 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:03:14.805 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:03:14.805 + [[ ! -d /home/vagrant/spdk_repo/output ]] 00:03:14.805 + [[ -d /home/vagrant/spdk_repo/output ]] 00:03:14.805 + [[ raid-vg-autotest == pkgdep-* ]] 00:03:14.805 + cd /home/vagrant/spdk_repo 00:03:14.805 + source /etc/os-release 00:03:14.805 ++ NAME='Fedora Linux' 00:03:14.805 ++ VERSION='39 (Cloud Edition)' 00:03:14.805 ++ ID=fedora 00:03:14.805 ++ VERSION_ID=39 00:03:14.805 ++ VERSION_CODENAME= 00:03:14.805 ++ PLATFORM_ID=platform:f39 00:03:14.805 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:03:14.805 ++ ANSI_COLOR='0;38;2;60;110;180' 00:03:14.805 ++ LOGO=fedora-logo-icon 00:03:14.805 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:03:14.805 ++ HOME_URL=https://fedoraproject.org/ 00:03:14.805 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:03:14.805 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:03:14.805 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:03:14.805 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:03:14.805 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:03:14.805 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:03:14.805 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:03:14.805 ++ SUPPORT_END=2024-11-12 00:03:14.805 ++ VARIANT='Cloud Edition' 00:03:14.805 ++ VARIANT_ID=cloud 00:03:14.805 + uname -a 00:03:14.805 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:03:14.805 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:15.064 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:15.064 Hugepages 00:03:15.064 node hugesize free / total 
00:03:15.064 node0 1048576kB 0 / 0 00:03:15.064 node0 2048kB 0 / 0 00:03:15.064 00:03:15.064 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:15.322 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:15.322 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:15.322 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:03:15.322 + rm -f /tmp/spdk-ld-path 00:03:15.322 + source autorun-spdk.conf 00:03:15.322 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:15.322 ++ SPDK_RUN_ASAN=1 00:03:15.322 ++ SPDK_RUN_UBSAN=1 00:03:15.322 ++ SPDK_TEST_RAID=1 00:03:15.322 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:15.322 ++ RUN_NIGHTLY=0 00:03:15.322 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:03:15.322 + [[ -n '' ]] 00:03:15.322 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:03:15.322 + for M in /var/spdk/build-*-manifest.txt 00:03:15.322 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:03:15.322 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:15.322 + for M in /var/spdk/build-*-manifest.txt 00:03:15.322 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:03:15.322 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:15.322 + for M in /var/spdk/build-*-manifest.txt 00:03:15.322 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:03:15.322 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:15.322 ++ uname 00:03:15.322 + [[ Linux == \L\i\n\u\x ]] 00:03:15.322 + sudo dmesg -T 00:03:15.322 + sudo dmesg --clear 00:03:15.322 + dmesg_pid=5247 00:03:15.322 + sudo dmesg -Tw 00:03:15.322 + [[ Fedora Linux == FreeBSD ]] 00:03:15.322 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:15.322 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:15.322 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:03:15.322 + [[ -x /usr/src/fio-static/fio ]] 00:03:15.322 + export FIO_BIN=/usr/src/fio-static/fio 00:03:15.322 + 
FIO_BIN=/usr/src/fio-static/fio 00:03:15.322 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:03:15.322 + [[ ! -v VFIO_QEMU_BIN ]] 00:03:15.322 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:03:15.322 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:15.322 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:15.322 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:03:15.323 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:15.323 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:15.323 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:15.323 Test configuration: 00:03:15.323 SPDK_RUN_FUNCTIONAL_TEST=1 00:03:15.323 SPDK_RUN_ASAN=1 00:03:15.323 SPDK_RUN_UBSAN=1 00:03:15.323 SPDK_TEST_RAID=1 00:03:15.323 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:15.581 RUN_NIGHTLY=0 20:00:00 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:03:15.581 20:00:00 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:15.581 20:00:00 -- scripts/common.sh@15 -- $ shopt -s extglob 00:03:15.581 20:00:00 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:03:15.581 20:00:00 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:15.581 20:00:00 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:15.581 20:00:00 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:15.581 20:00:00 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:15.581 20:00:00 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:15.581 20:00:00 -- paths/export.sh@5 -- $ export PATH 00:03:15.581 20:00:00 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:15.581 20:00:00 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:03:15.581 20:00:00 -- common/autobuild_common.sh@486 -- $ date +%s 00:03:15.581 20:00:00 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1729195200.XXXXXX 00:03:15.581 20:00:00 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1729195200.O9g8UT 00:03:15.581 20:00:00 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:03:15.581 20:00:00 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:03:15.581 20:00:00 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:03:15.581 20:00:00 -- common/autobuild_common.sh@499 
-- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:03:15.581 20:00:00 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:03:15.581 20:00:00 -- common/autobuild_common.sh@502 -- $ get_config_params 00:03:15.581 20:00:00 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:03:15.581 20:00:00 -- common/autotest_common.sh@10 -- $ set +x 00:03:15.581 20:00:01 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 00:03:15.581 20:00:01 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:03:15.581 20:00:01 -- pm/common@17 -- $ local monitor 00:03:15.581 20:00:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:15.581 20:00:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:15.581 20:00:01 -- pm/common@25 -- $ sleep 1 00:03:15.581 20:00:01 -- pm/common@21 -- $ date +%s 00:03:15.581 20:00:01 -- pm/common@21 -- $ date +%s 00:03:15.581 20:00:01 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1729195201 00:03:15.581 20:00:01 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1729195201 00:03:15.581 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1729195201_collect-cpu-load.pm.log 00:03:15.581 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1729195201_collect-vmstat.pm.log 00:03:16.517 20:00:02 -- common/autobuild_common.sh@505 -- 
$ trap stop_monitor_resources EXIT 00:03:16.517 20:00:02 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:03:16.517 20:00:02 -- spdk/autobuild.sh@12 -- $ umask 022 00:03:16.517 20:00:02 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:16.517 20:00:02 -- spdk/autobuild.sh@16 -- $ date -u 00:03:16.517 Thu Oct 17 08:00:02 PM UTC 2024 00:03:16.517 20:00:02 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:03:16.517 v25.01-pre-73-g5c4ed23c8 00:03:16.517 20:00:02 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:03:16.517 20:00:02 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:03:16.517 20:00:02 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:16.517 20:00:02 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:16.517 20:00:02 -- common/autotest_common.sh@10 -- $ set +x 00:03:16.517 ************************************ 00:03:16.517 START TEST asan 00:03:16.517 ************************************ 00:03:16.517 using asan 00:03:16.517 20:00:02 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan' 00:03:16.517 00:03:16.517 real 0m0.000s 00:03:16.517 user 0m0.000s 00:03:16.517 sys 0m0.000s 00:03:16.517 20:00:02 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:16.517 20:00:02 asan -- common/autotest_common.sh@10 -- $ set +x 00:03:16.517 ************************************ 00:03:16.517 END TEST asan 00:03:16.517 ************************************ 00:03:16.517 20:00:02 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:03:16.517 20:00:02 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:03:16.517 20:00:02 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:16.517 20:00:02 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:16.517 20:00:02 -- common/autotest_common.sh@10 -- $ set +x 00:03:16.517 ************************************ 00:03:16.517 START TEST ubsan 00:03:16.517 ************************************ 00:03:16.517 using ubsan 00:03:16.517 20:00:02 ubsan -- 
common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:03:16.517 00:03:16.517 real 0m0.000s 00:03:16.517 user 0m0.000s 00:03:16.518 sys 0m0.000s 00:03:16.518 20:00:02 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:16.518 20:00:02 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:03:16.518 ************************************ 00:03:16.518 END TEST ubsan 00:03:16.518 ************************************ 00:03:16.518 20:00:02 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:03:16.518 20:00:02 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:16.518 20:00:02 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:16.518 20:00:02 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:16.518 20:00:02 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:16.518 20:00:02 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:16.518 20:00:02 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:16.518 20:00:02 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:16.518 20:00:02 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared 00:03:16.776 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:16.776 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:17.344 Using 'verbs' RDMA provider 00:03:33.157 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:45.359 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:45.359 Creating mk/config.mk...done. 00:03:45.359 Creating mk/cc.flags.mk...done. 00:03:45.359 Type 'make' to build. 
00:03:45.359 20:00:29 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:45.359 20:00:29 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:45.359 20:00:29 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:45.359 20:00:29 -- common/autotest_common.sh@10 -- $ set +x 00:03:45.359 ************************************ 00:03:45.359 START TEST make 00:03:45.359 ************************************ 00:03:45.359 20:00:29 make -- common/autotest_common.sh@1125 -- $ make -j10 00:03:45.359 make[1]: Nothing to be done for 'all'. 00:03:57.622 The Meson build system 00:03:57.622 Version: 1.5.0 00:03:57.622 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:03:57.622 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:03:57.622 Build type: native build 00:03:57.622 Program cat found: YES (/usr/bin/cat) 00:03:57.622 Project name: DPDK 00:03:57.622 Project version: 24.03.0 00:03:57.622 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:57.622 C linker for the host machine: cc ld.bfd 2.40-14 00:03:57.622 Host machine cpu family: x86_64 00:03:57.622 Host machine cpu: x86_64 00:03:57.622 Message: ## Building in Developer Mode ## 00:03:57.622 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:57.622 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:03:57.622 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:57.622 Program python3 found: YES (/usr/bin/python3) 00:03:57.622 Program cat found: YES (/usr/bin/cat) 00:03:57.622 Compiler for C supports arguments -march=native: YES 00:03:57.622 Checking for size of "void *" : 8 00:03:57.622 Checking for size of "void *" : 8 (cached) 00:03:57.622 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:03:57.622 Library m found: YES 00:03:57.622 Library numa found: YES 00:03:57.622 Has header "numaif.h" : YES 
00:03:57.622 Library fdt found: NO 00:03:57.622 Library execinfo found: NO 00:03:57.622 Has header "execinfo.h" : YES 00:03:57.622 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:57.622 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:57.622 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:57.622 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:57.622 Run-time dependency openssl found: YES 3.1.1 00:03:57.622 Run-time dependency libpcap found: YES 1.10.4 00:03:57.622 Has header "pcap.h" with dependency libpcap: YES 00:03:57.622 Compiler for C supports arguments -Wcast-qual: YES 00:03:57.622 Compiler for C supports arguments -Wdeprecated: YES 00:03:57.622 Compiler for C supports arguments -Wformat: YES 00:03:57.622 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:57.622 Compiler for C supports arguments -Wformat-security: NO 00:03:57.622 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:57.622 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:57.622 Compiler for C supports arguments -Wnested-externs: YES 00:03:57.622 Compiler for C supports arguments -Wold-style-definition: YES 00:03:57.622 Compiler for C supports arguments -Wpointer-arith: YES 00:03:57.622 Compiler for C supports arguments -Wsign-compare: YES 00:03:57.622 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:57.622 Compiler for C supports arguments -Wundef: YES 00:03:57.622 Compiler for C supports arguments -Wwrite-strings: YES 00:03:57.622 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:57.622 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:57.622 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:57.622 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:57.622 Program objdump found: YES (/usr/bin/objdump) 00:03:57.622 Compiler for C supports arguments -mavx512f: YES 00:03:57.622 Checking if "AVX512 
checking" compiles: YES 00:03:57.622 Fetching value of define "__SSE4_2__" : 1 00:03:57.622 Fetching value of define "__AES__" : 1 00:03:57.622 Fetching value of define "__AVX__" : 1 00:03:57.622 Fetching value of define "__AVX2__" : 1 00:03:57.622 Fetching value of define "__AVX512BW__" : (undefined) 00:03:57.622 Fetching value of define "__AVX512CD__" : (undefined) 00:03:57.622 Fetching value of define "__AVX512DQ__" : (undefined) 00:03:57.622 Fetching value of define "__AVX512F__" : (undefined) 00:03:57.622 Fetching value of define "__AVX512VL__" : (undefined) 00:03:57.622 Fetching value of define "__PCLMUL__" : 1 00:03:57.622 Fetching value of define "__RDRND__" : 1 00:03:57.622 Fetching value of define "__RDSEED__" : 1 00:03:57.622 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:57.622 Fetching value of define "__znver1__" : (undefined) 00:03:57.622 Fetching value of define "__znver2__" : (undefined) 00:03:57.622 Fetching value of define "__znver3__" : (undefined) 00:03:57.622 Fetching value of define "__znver4__" : (undefined) 00:03:57.622 Library asan found: YES 00:03:57.622 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:57.622 Message: lib/log: Defining dependency "log" 00:03:57.622 Message: lib/kvargs: Defining dependency "kvargs" 00:03:57.622 Message: lib/telemetry: Defining dependency "telemetry" 00:03:57.622 Library rt found: YES 00:03:57.622 Checking for function "getentropy" : NO 00:03:57.622 Message: lib/eal: Defining dependency "eal" 00:03:57.622 Message: lib/ring: Defining dependency "ring" 00:03:57.622 Message: lib/rcu: Defining dependency "rcu" 00:03:57.622 Message: lib/mempool: Defining dependency "mempool" 00:03:57.622 Message: lib/mbuf: Defining dependency "mbuf" 00:03:57.622 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:57.622 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:57.622 Compiler for C supports arguments -mpclmul: YES 00:03:57.622 Compiler for C supports arguments 
-maes: YES 00:03:57.622 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:57.622 Compiler for C supports arguments -mavx512bw: YES 00:03:57.622 Compiler for C supports arguments -mavx512dq: YES 00:03:57.622 Compiler for C supports arguments -mavx512vl: YES 00:03:57.622 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:57.622 Compiler for C supports arguments -mavx2: YES 00:03:57.622 Compiler for C supports arguments -mavx: YES 00:03:57.622 Message: lib/net: Defining dependency "net" 00:03:57.622 Message: lib/meter: Defining dependency "meter" 00:03:57.622 Message: lib/ethdev: Defining dependency "ethdev" 00:03:57.622 Message: lib/pci: Defining dependency "pci" 00:03:57.622 Message: lib/cmdline: Defining dependency "cmdline" 00:03:57.622 Message: lib/hash: Defining dependency "hash" 00:03:57.622 Message: lib/timer: Defining dependency "timer" 00:03:57.622 Message: lib/compressdev: Defining dependency "compressdev" 00:03:57.622 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:57.622 Message: lib/dmadev: Defining dependency "dmadev" 00:03:57.622 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:57.622 Message: lib/power: Defining dependency "power" 00:03:57.622 Message: lib/reorder: Defining dependency "reorder" 00:03:57.622 Message: lib/security: Defining dependency "security" 00:03:57.622 Has header "linux/userfaultfd.h" : YES 00:03:57.622 Has header "linux/vduse.h" : YES 00:03:57.622 Message: lib/vhost: Defining dependency "vhost" 00:03:57.622 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:57.622 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:57.622 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:57.622 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:57.622 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:57.622 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:57.622 Message: 
Disabling ml/* drivers: missing internal dependency "mldev" 00:03:57.622 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:57.622 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:57.622 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:57.622 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:57.622 Configuring doxy-api-html.conf using configuration 00:03:57.622 Configuring doxy-api-man.conf using configuration 00:03:57.622 Program mandb found: YES (/usr/bin/mandb) 00:03:57.622 Program sphinx-build found: NO 00:03:57.622 Configuring rte_build_config.h using configuration 00:03:57.622 Message: 00:03:57.622 ================= 00:03:57.622 Applications Enabled 00:03:57.622 ================= 00:03:57.622 00:03:57.622 apps: 00:03:57.622 00:03:57.622 00:03:57.622 Message: 00:03:57.622 ================= 00:03:57.622 Libraries Enabled 00:03:57.622 ================= 00:03:57.622 00:03:57.622 libs: 00:03:57.622 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:57.622 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:57.622 cryptodev, dmadev, power, reorder, security, vhost, 00:03:57.622 00:03:57.622 Message: 00:03:57.622 =============== 00:03:57.622 Drivers Enabled 00:03:57.622 =============== 00:03:57.622 00:03:57.622 common: 00:03:57.622 00:03:57.622 bus: 00:03:57.622 pci, vdev, 00:03:57.622 mempool: 00:03:57.622 ring, 00:03:57.622 dma: 00:03:57.622 00:03:57.622 net: 00:03:57.622 00:03:57.622 crypto: 00:03:57.623 00:03:57.623 compress: 00:03:57.623 00:03:57.623 vdpa: 00:03:57.623 00:03:57.623 00:03:57.623 Message: 00:03:57.623 ================= 00:03:57.623 Content Skipped 00:03:57.623 ================= 00:03:57.623 00:03:57.623 apps: 00:03:57.623 dumpcap: explicitly disabled via build config 00:03:57.623 graph: explicitly disabled via build config 00:03:57.623 pdump: explicitly disabled via build config 00:03:57.623 proc-info: explicitly disabled via 
build config 00:03:57.623 test-acl: explicitly disabled via build config 00:03:57.623 test-bbdev: explicitly disabled via build config 00:03:57.623 test-cmdline: explicitly disabled via build config 00:03:57.623 test-compress-perf: explicitly disabled via build config 00:03:57.623 test-crypto-perf: explicitly disabled via build config 00:03:57.623 test-dma-perf: explicitly disabled via build config 00:03:57.623 test-eventdev: explicitly disabled via build config 00:03:57.623 test-fib: explicitly disabled via build config 00:03:57.623 test-flow-perf: explicitly disabled via build config 00:03:57.623 test-gpudev: explicitly disabled via build config 00:03:57.623 test-mldev: explicitly disabled via build config 00:03:57.623 test-pipeline: explicitly disabled via build config 00:03:57.623 test-pmd: explicitly disabled via build config 00:03:57.623 test-regex: explicitly disabled via build config 00:03:57.623 test-sad: explicitly disabled via build config 00:03:57.623 test-security-perf: explicitly disabled via build config 00:03:57.623 00:03:57.623 libs: 00:03:57.623 argparse: explicitly disabled via build config 00:03:57.623 metrics: explicitly disabled via build config 00:03:57.623 acl: explicitly disabled via build config 00:03:57.623 bbdev: explicitly disabled via build config 00:03:57.623 bitratestats: explicitly disabled via build config 00:03:57.623 bpf: explicitly disabled via build config 00:03:57.623 cfgfile: explicitly disabled via build config 00:03:57.623 distributor: explicitly disabled via build config 00:03:57.623 efd: explicitly disabled via build config 00:03:57.623 eventdev: explicitly disabled via build config 00:03:57.623 dispatcher: explicitly disabled via build config 00:03:57.623 gpudev: explicitly disabled via build config 00:03:57.623 gro: explicitly disabled via build config 00:03:57.623 gso: explicitly disabled via build config 00:03:57.623 ip_frag: explicitly disabled via build config 00:03:57.623 jobstats: explicitly disabled via build 
config 00:03:57.623 latencystats: explicitly disabled via build config 00:03:57.623 lpm: explicitly disabled via build config 00:03:57.623 member: explicitly disabled via build config 00:03:57.623 pcapng: explicitly disabled via build config 00:03:57.623 rawdev: explicitly disabled via build config 00:03:57.623 regexdev: explicitly disabled via build config 00:03:57.623 mldev: explicitly disabled via build config 00:03:57.623 rib: explicitly disabled via build config 00:03:57.623 sched: explicitly disabled via build config 00:03:57.623 stack: explicitly disabled via build config 00:03:57.623 ipsec: explicitly disabled via build config 00:03:57.623 pdcp: explicitly disabled via build config 00:03:57.623 fib: explicitly disabled via build config 00:03:57.623 port: explicitly disabled via build config 00:03:57.623 pdump: explicitly disabled via build config 00:03:57.623 table: explicitly disabled via build config 00:03:57.623 pipeline: explicitly disabled via build config 00:03:57.623 graph: explicitly disabled via build config 00:03:57.623 node: explicitly disabled via build config 00:03:57.623 00:03:57.623 drivers: 00:03:57.623 common/cpt: not in enabled drivers build config 00:03:57.623 common/dpaax: not in enabled drivers build config 00:03:57.623 common/iavf: not in enabled drivers build config 00:03:57.623 common/idpf: not in enabled drivers build config 00:03:57.623 common/ionic: not in enabled drivers build config 00:03:57.623 common/mvep: not in enabled drivers build config 00:03:57.623 common/octeontx: not in enabled drivers build config 00:03:57.623 bus/auxiliary: not in enabled drivers build config 00:03:57.623 bus/cdx: not in enabled drivers build config 00:03:57.623 bus/dpaa: not in enabled drivers build config 00:03:57.623 bus/fslmc: not in enabled drivers build config 00:03:57.623 bus/ifpga: not in enabled drivers build config 00:03:57.623 bus/platform: not in enabled drivers build config 00:03:57.623 bus/uacce: not in enabled drivers build config 
00:03:57.623 bus/vmbus: not in enabled drivers build config 00:03:57.623 common/cnxk: not in enabled drivers build config 00:03:57.623 common/mlx5: not in enabled drivers build config 00:03:57.623 common/nfp: not in enabled drivers build config 00:03:57.623 common/nitrox: not in enabled drivers build config 00:03:57.623 common/qat: not in enabled drivers build config 00:03:57.623 common/sfc_efx: not in enabled drivers build config 00:03:57.623 mempool/bucket: not in enabled drivers build config 00:03:57.623 mempool/cnxk: not in enabled drivers build config 00:03:57.623 mempool/dpaa: not in enabled drivers build config 00:03:57.623 mempool/dpaa2: not in enabled drivers build config 00:03:57.623 mempool/octeontx: not in enabled drivers build config 00:03:57.623 mempool/stack: not in enabled drivers build config 00:03:57.623 dma/cnxk: not in enabled drivers build config 00:03:57.623 dma/dpaa: not in enabled drivers build config 00:03:57.623 dma/dpaa2: not in enabled drivers build config 00:03:57.623 dma/hisilicon: not in enabled drivers build config 00:03:57.623 dma/idxd: not in enabled drivers build config 00:03:57.623 dma/ioat: not in enabled drivers build config 00:03:57.623 dma/skeleton: not in enabled drivers build config 00:03:57.623 net/af_packet: not in enabled drivers build config 00:03:57.623 net/af_xdp: not in enabled drivers build config 00:03:57.623 net/ark: not in enabled drivers build config 00:03:57.623 net/atlantic: not in enabled drivers build config 00:03:57.623 net/avp: not in enabled drivers build config 00:03:57.623 net/axgbe: not in enabled drivers build config 00:03:57.623 net/bnx2x: not in enabled drivers build config 00:03:57.623 net/bnxt: not in enabled drivers build config 00:03:57.623 net/bonding: not in enabled drivers build config 00:03:57.623 net/cnxk: not in enabled drivers build config 00:03:57.623 net/cpfl: not in enabled drivers build config 00:03:57.623 net/cxgbe: not in enabled drivers build config 00:03:57.623 net/dpaa: not in 
enabled drivers build config 00:03:57.623 net/dpaa2: not in enabled drivers build config 00:03:57.623 net/e1000: not in enabled drivers build config 00:03:57.623 net/ena: not in enabled drivers build config 00:03:57.623 net/enetc: not in enabled drivers build config 00:03:57.623 net/enetfec: not in enabled drivers build config 00:03:57.623 net/enic: not in enabled drivers build config 00:03:57.623 net/failsafe: not in enabled drivers build config 00:03:57.623 net/fm10k: not in enabled drivers build config 00:03:57.623 net/gve: not in enabled drivers build config 00:03:57.623 net/hinic: not in enabled drivers build config 00:03:57.623 net/hns3: not in enabled drivers build config 00:03:57.623 net/i40e: not in enabled drivers build config 00:03:57.623 net/iavf: not in enabled drivers build config 00:03:57.623 net/ice: not in enabled drivers build config 00:03:57.623 net/idpf: not in enabled drivers build config 00:03:57.623 net/igc: not in enabled drivers build config 00:03:57.623 net/ionic: not in enabled drivers build config 00:03:57.623 net/ipn3ke: not in enabled drivers build config 00:03:57.623 net/ixgbe: not in enabled drivers build config 00:03:57.623 net/mana: not in enabled drivers build config 00:03:57.623 net/memif: not in enabled drivers build config 00:03:57.623 net/mlx4: not in enabled drivers build config 00:03:57.623 net/mlx5: not in enabled drivers build config 00:03:57.623 net/mvneta: not in enabled drivers build config 00:03:57.623 net/mvpp2: not in enabled drivers build config 00:03:57.623 net/netvsc: not in enabled drivers build config 00:03:57.623 net/nfb: not in enabled drivers build config 00:03:57.623 net/nfp: not in enabled drivers build config 00:03:57.623 net/ngbe: not in enabled drivers build config 00:03:57.623 net/null: not in enabled drivers build config 00:03:57.623 net/octeontx: not in enabled drivers build config 00:03:57.623 net/octeon_ep: not in enabled drivers build config 00:03:57.623 net/pcap: not in enabled drivers build 
config 00:03:57.623 net/pfe: not in enabled drivers build config 00:03:57.623 net/qede: not in enabled drivers build config 00:03:57.623 net/ring: not in enabled drivers build config 00:03:57.623 net/sfc: not in enabled drivers build config 00:03:57.623 net/softnic: not in enabled drivers build config 00:03:57.623 net/tap: not in enabled drivers build config 00:03:57.623 net/thunderx: not in enabled drivers build config 00:03:57.623 net/txgbe: not in enabled drivers build config 00:03:57.623 net/vdev_netvsc: not in enabled drivers build config 00:03:57.623 net/vhost: not in enabled drivers build config 00:03:57.623 net/virtio: not in enabled drivers build config 00:03:57.623 net/vmxnet3: not in enabled drivers build config 00:03:57.623 raw/*: missing internal dependency, "rawdev" 00:03:57.623 crypto/armv8: not in enabled drivers build config 00:03:57.623 crypto/bcmfs: not in enabled drivers build config 00:03:57.623 crypto/caam_jr: not in enabled drivers build config 00:03:57.623 crypto/ccp: not in enabled drivers build config 00:03:57.623 crypto/cnxk: not in enabled drivers build config 00:03:57.623 crypto/dpaa_sec: not in enabled drivers build config 00:03:57.623 crypto/dpaa2_sec: not in enabled drivers build config 00:03:57.623 crypto/ipsec_mb: not in enabled drivers build config 00:03:57.623 crypto/mlx5: not in enabled drivers build config 00:03:57.623 crypto/mvsam: not in enabled drivers build config 00:03:57.623 crypto/nitrox: not in enabled drivers build config 00:03:57.623 crypto/null: not in enabled drivers build config 00:03:57.623 crypto/octeontx: not in enabled drivers build config 00:03:57.623 crypto/openssl: not in enabled drivers build config 00:03:57.623 crypto/scheduler: not in enabled drivers build config 00:03:57.623 crypto/uadk: not in enabled drivers build config 00:03:57.623 crypto/virtio: not in enabled drivers build config 00:03:57.623 compress/isal: not in enabled drivers build config 00:03:57.623 compress/mlx5: not in enabled drivers build 
config 00:03:57.623 compress/nitrox: not in enabled drivers build config 00:03:57.623 compress/octeontx: not in enabled drivers build config 00:03:57.623 compress/zlib: not in enabled drivers build config 00:03:57.623 regex/*: missing internal dependency, "regexdev" 00:03:57.623 ml/*: missing internal dependency, "mldev" 00:03:57.623 vdpa/ifc: not in enabled drivers build config 00:03:57.623 vdpa/mlx5: not in enabled drivers build config 00:03:57.623 vdpa/nfp: not in enabled drivers build config 00:03:57.623 vdpa/sfc: not in enabled drivers build config 00:03:57.623 event/*: missing internal dependency, "eventdev" 00:03:57.623 baseband/*: missing internal dependency, "bbdev" 00:03:57.623 gpu/*: missing internal dependency, "gpudev" 00:03:57.623 00:03:57.623 00:03:57.623 Build targets in project: 85 00:03:57.623 00:03:57.623 DPDK 24.03.0 00:03:57.623 00:03:57.623 User defined options 00:03:57.623 buildtype : debug 00:03:57.624 default_library : shared 00:03:57.624 libdir : lib 00:03:57.624 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:57.624 b_sanitize : address 00:03:57.624 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:57.624 c_link_args : 00:03:57.624 cpu_instruction_set: native 00:03:57.624 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:03:57.624 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:03:57.624 enable_docs : false 00:03:57.624 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:03:57.624 enable_kmods : false 00:03:57.624 max_lcores : 128 00:03:57.624 tests : false 
00:03:57.624 00:03:57.624 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:57.624 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:03:57.624 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:57.624 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:57.624 [3/268] Linking static target lib/librte_kvargs.a 00:03:57.624 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:57.624 [5/268] Linking static target lib/librte_log.a 00:03:57.624 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:57.882 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:57.882 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:57.882 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:58.141 [10/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:58.141 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:58.141 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:58.141 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:58.141 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:58.141 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:58.400 [16/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:58.400 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:58.400 [18/268] Linking static target lib/librte_telemetry.a 00:03:58.400 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:58.400 [20/268] Linking target lib/librte_log.so.24.1 00:03:58.658 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 
00:03:58.658 [22/268] Linking target lib/librte_kvargs.so.24.1 00:03:58.658 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:58.916 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:58.916 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:58.916 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:58.916 [27/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:59.175 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:59.175 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:59.175 [30/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:59.434 [31/268] Linking target lib/librte_telemetry.so.24.1 00:03:59.434 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:59.434 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:59.434 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:59.434 [35/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:59.693 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:59.693 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:59.693 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:59.951 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:59.951 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:59.951 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:59.951 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:59.951 [43/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:59.951 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:04:00.208 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:04:00.467 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:04:00.467 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:04:00.467 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:04:00.725 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:04:00.725 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:04:00.983 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:04:00.983 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:04:00.983 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:04:00.983 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:04:01.241 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:04:01.241 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:04:01.241 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:04:01.499 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:04:01.499 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:04:01.499 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:04:01.758 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:04:01.758 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:04:01.758 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:04:01.758 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:04:01.758 [65/268] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:04:02.016 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:04:02.275 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:04:02.275 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:04:02.532 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:04:02.532 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:04:02.791 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:04:02.791 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:04:02.791 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:04:02.791 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:04:02.791 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:04:02.791 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:04:02.791 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:04:03.049 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:04:03.049 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:04:03.049 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:04:03.307 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:04:03.307 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:04:03.307 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:04:03.307 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:04:03.307 [85/268] Linking static target lib/librte_eal.a 00:04:03.566 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:04:03.825 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:04:03.825 [88/268] Compiling C object 
lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:04:03.825 [89/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:04:03.825 [90/268] Linking static target lib/librte_ring.a 00:04:03.825 [91/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:04:03.825 [92/268] Linking static target lib/librte_rcu.a 00:04:04.083 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:04:04.083 [94/268] Linking static target lib/librte_mempool.a 00:04:04.343 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:04:04.343 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:04:04.343 [97/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:04:04.343 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:04:04.343 [99/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:04:04.343 [100/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:04:04.343 [101/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:04:04.909 [102/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:04:04.909 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:04:04.909 [104/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:04:04.909 [105/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:04:04.909 [106/268] Linking static target lib/librte_mbuf.a 00:04:05.168 [107/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:04:05.168 [108/268] Linking static target lib/librte_meter.a 00:04:05.168 [109/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:04:05.426 [110/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:04:05.426 [111/268] Linking static target lib/librte_net.a 00:04:05.426 [112/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 
00:04:05.426 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:04:05.426 [114/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:04:05.685 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:04:05.685 [116/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:04:05.943 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:04:06.201 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:04:06.201 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:04:06.528 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:04:06.528 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:04:06.786 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:04:07.353 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:04:07.353 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:04:07.353 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:04:07.353 [126/268] Linking static target lib/librte_pci.a 00:04:07.611 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:04:07.870 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:04:07.870 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:04:07.870 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:04:07.870 [131/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:07.870 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:04:07.870 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:04:07.870 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:04:07.870 
[135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:04:07.870 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:04:07.870 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:04:08.129 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:04:08.129 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:04:08.129 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:04:08.129 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:04:08.129 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:04:08.129 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:04:08.129 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:04:08.387 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:04:08.645 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:04:08.645 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:04:08.645 [148/268] Linking static target lib/librte_cmdline.a 00:04:08.903 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:04:08.903 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:04:08.903 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:04:08.903 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:04:08.903 [153/268] Linking static target lib/librte_timer.a 00:04:09.161 [154/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:04:09.161 [155/268] Linking static target lib/librte_ethdev.a 00:04:09.419 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:04:09.419 [157/268] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:04:09.677 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:04:09.677 [159/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:04:09.677 [160/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:04:09.677 [161/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:04:09.935 [162/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:04:09.935 [163/268] Linking static target lib/librte_compressdev.a 00:04:09.935 [164/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:04:09.935 [165/268] Linking static target lib/librte_hash.a 00:04:10.192 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:04:10.192 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:04:10.192 [168/268] Linking static target lib/librte_dmadev.a 00:04:10.192 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:04:10.454 [170/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:04:10.799 [171/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:04:10.799 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:04:10.799 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:04:10.799 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:11.056 [175/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:11.056 [176/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:04:11.056 [177/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:04:11.313 [178/268] Compiling C object 
lib/librte_power.a.p/power_rte_power.c.o 00:04:11.314 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:04:11.314 [180/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:04:11.314 [181/268] Linking static target lib/librte_cryptodev.a 00:04:11.314 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:04:11.314 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:04:11.314 [184/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:04:11.878 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:04:11.878 [186/268] Linking static target lib/librte_power.a 00:04:11.878 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:04:12.136 [188/268] Linking static target lib/librte_reorder.a 00:04:12.136 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:04:12.136 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:04:12.393 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:04:12.393 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:04:12.393 [193/268] Linking static target lib/librte_security.a 00:04:12.651 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:04:12.651 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:04:13.215 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:04:13.215 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:04:13.215 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:04:13.473 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:04:13.473 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:04:13.473 [201/268] Compiling C object 
lib/librte_vhost.a.p/vhost_vduse.c.o 00:04:14.038 [202/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:14.038 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:04:14.038 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:04:14.297 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:04:14.297 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:04:14.297 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:04:14.297 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:04:14.297 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:04:14.555 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:04:14.555 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:04:14.813 [212/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:04:14.813 [213/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:04:14.813 [214/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:14.813 [215/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:14.813 [216/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:14.813 [217/268] Linking static target drivers/librte_bus_pci.a 00:04:14.813 [218/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:14.813 [219/268] Linking static target drivers/librte_bus_vdev.a 00:04:14.813 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:04:15.071 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:04:15.071 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a 
custom command 00:04:15.071 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:15.071 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:15.071 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:15.071 [226/268] Linking static target drivers/librte_mempool_ring.a 00:04:15.329 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:15.894 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:04:15.894 [229/268] Linking target lib/librte_eal.so.24.1 00:04:15.894 [230/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:04:16.151 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:04:16.151 [232/268] Linking target lib/librte_meter.so.24.1 00:04:16.151 [233/268] Linking target lib/librte_pci.so.24.1 00:04:16.151 [234/268] Linking target lib/librte_ring.so.24.1 00:04:16.151 [235/268] Linking target lib/librte_dmadev.so.24.1 00:04:16.151 [236/268] Linking target lib/librte_timer.so.24.1 00:04:16.151 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:04:16.151 [238/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:04:16.151 [239/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:04:16.151 [240/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:04:16.408 [241/268] Linking target drivers/librte_bus_pci.so.24.1 00:04:16.408 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:04:16.408 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:04:16.408 [244/268] Linking target lib/librte_rcu.so.24.1 00:04:16.408 [245/268] Linking target 
lib/librte_mempool.so.24.1 00:04:16.408 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:04:16.408 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:04:16.408 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:04:16.408 [249/268] Linking target lib/librte_mbuf.so.24.1 00:04:16.666 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:04:16.666 [251/268] Linking target lib/librte_reorder.so.24.1 00:04:16.666 [252/268] Linking target lib/librte_compressdev.so.24.1 00:04:16.666 [253/268] Linking target lib/librte_net.so.24.1 00:04:16.666 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:04:16.924 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:04:16.924 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:04:16.924 [257/268] Linking target lib/librte_cmdline.so.24.1 00:04:16.924 [258/268] Linking target lib/librte_hash.so.24.1 00:04:16.924 [259/268] Linking target lib/librte_security.so.24.1 00:04:16.925 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:04:17.182 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:17.440 [262/268] Linking target lib/librte_ethdev.so.24.1 00:04:17.440 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:04:17.440 [264/268] Linking target lib/librte_power.so.24.1 00:04:20.734 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:04:20.734 [266/268] Linking static target lib/librte_vhost.a 00:04:22.117 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:04:22.376 [268/268] Linking target lib/librte_vhost.so.24.1 00:04:22.376 INFO: autodetecting backend as ninja 00:04:22.376 INFO: calculating backend command to run: 
/usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:04:44.293 CC lib/ut_mock/mock.o 00:04:44.293 CC lib/log/log.o 00:04:44.293 CC lib/log/log_deprecated.o 00:04:44.293 CC lib/log/log_flags.o 00:04:44.293 CC lib/ut/ut.o 00:04:44.293 LIB libspdk_ut_mock.a 00:04:44.293 LIB libspdk_log.a 00:04:44.293 SO libspdk_ut_mock.so.6.0 00:04:44.293 LIB libspdk_ut.a 00:04:44.293 SO libspdk_log.so.7.1 00:04:44.293 SO libspdk_ut.so.2.0 00:04:44.293 SYMLINK libspdk_ut_mock.so 00:04:44.293 SYMLINK libspdk_log.so 00:04:44.293 SYMLINK libspdk_ut.so 00:04:44.293 CC lib/util/base64.o 00:04:44.293 CC lib/util/cpuset.o 00:04:44.293 CC lib/util/bit_array.o 00:04:44.293 CC lib/util/crc16.o 00:04:44.293 CC lib/util/crc32.o 00:04:44.293 CC lib/util/crc32c.o 00:04:44.293 CXX lib/trace_parser/trace.o 00:04:44.293 CC lib/dma/dma.o 00:04:44.293 CC lib/ioat/ioat.o 00:04:44.293 CC lib/vfio_user/host/vfio_user_pci.o 00:04:44.293 CC lib/vfio_user/host/vfio_user.o 00:04:44.293 CC lib/util/crc32_ieee.o 00:04:44.293 CC lib/util/crc64.o 00:04:44.293 CC lib/util/dif.o 00:04:44.293 CC lib/util/fd.o 00:04:44.293 LIB libspdk_dma.a 00:04:44.293 SO libspdk_dma.so.5.0 00:04:44.293 CC lib/util/fd_group.o 00:04:44.293 CC lib/util/file.o 00:04:44.293 SYMLINK libspdk_dma.so 00:04:44.293 CC lib/util/hexlify.o 00:04:44.293 CC lib/util/iov.o 00:04:44.293 LIB libspdk_ioat.a 00:04:44.293 CC lib/util/math.o 00:04:44.293 CC lib/util/net.o 00:04:44.293 SO libspdk_ioat.so.7.0 00:04:44.293 LIB libspdk_vfio_user.a 00:04:44.293 CC lib/util/pipe.o 00:04:44.293 SO libspdk_vfio_user.so.5.0 00:04:44.293 SYMLINK libspdk_ioat.so 00:04:44.293 CC lib/util/strerror_tls.o 00:04:44.293 CC lib/util/string.o 00:04:44.293 CC lib/util/uuid.o 00:04:44.293 SYMLINK libspdk_vfio_user.so 00:04:44.293 CC lib/util/xor.o 00:04:44.293 CC lib/util/zipf.o 00:04:44.293 CC lib/util/md5.o 00:04:44.293 LIB libspdk_util.a 00:04:44.293 SO libspdk_util.so.10.0 00:04:44.293 LIB libspdk_trace_parser.a 00:04:44.293 SYMLINK libspdk_util.so 
00:04:44.293 SO libspdk_trace_parser.so.6.0 00:04:44.293 SYMLINK libspdk_trace_parser.so 00:04:44.293 CC lib/vmd/vmd.o 00:04:44.294 CC lib/vmd/led.o 00:04:44.294 CC lib/rdma_utils/rdma_utils.o 00:04:44.294 CC lib/env_dpdk/env.o 00:04:44.294 CC lib/env_dpdk/memory.o 00:04:44.294 CC lib/env_dpdk/pci.o 00:04:44.294 CC lib/rdma_provider/common.o 00:04:44.294 CC lib/conf/conf.o 00:04:44.294 CC lib/idxd/idxd.o 00:04:44.294 CC lib/json/json_parse.o 00:04:44.294 CC lib/json/json_util.o 00:04:44.294 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:44.294 LIB libspdk_conf.a 00:04:44.294 SO libspdk_conf.so.6.0 00:04:44.294 CC lib/json/json_write.o 00:04:44.294 SYMLINK libspdk_conf.so 00:04:44.294 CC lib/env_dpdk/init.o 00:04:44.294 LIB libspdk_rdma_utils.a 00:04:44.294 SO libspdk_rdma_utils.so.1.0 00:04:44.294 LIB libspdk_rdma_provider.a 00:04:44.294 CC lib/env_dpdk/threads.o 00:04:44.294 SO libspdk_rdma_provider.so.6.0 00:04:44.294 SYMLINK libspdk_rdma_utils.so 00:04:44.294 CC lib/idxd/idxd_user.o 00:04:44.294 CC lib/env_dpdk/pci_ioat.o 00:04:44.294 SYMLINK libspdk_rdma_provider.so 00:04:44.294 CC lib/env_dpdk/pci_virtio.o 00:04:44.294 CC lib/env_dpdk/pci_vmd.o 00:04:44.294 CC lib/env_dpdk/pci_idxd.o 00:04:44.294 LIB libspdk_json.a 00:04:44.294 CC lib/env_dpdk/pci_event.o 00:04:44.294 SO libspdk_json.so.6.0 00:04:44.294 CC lib/env_dpdk/sigbus_handler.o 00:04:44.294 SYMLINK libspdk_json.so 00:04:44.294 CC lib/idxd/idxd_kernel.o 00:04:44.294 CC lib/env_dpdk/pci_dpdk.o 00:04:44.294 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:44.294 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:44.294 CC lib/jsonrpc/jsonrpc_server.o 00:04:44.294 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:44.294 CC lib/jsonrpc/jsonrpc_client.o 00:04:44.294 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:44.294 LIB libspdk_idxd.a 00:04:44.294 SO libspdk_idxd.so.12.1 00:04:44.294 LIB libspdk_vmd.a 00:04:44.294 SYMLINK libspdk_idxd.so 00:04:44.294 SO libspdk_vmd.so.6.0 00:04:44.294 SYMLINK libspdk_vmd.so 00:04:44.294 LIB 
libspdk_jsonrpc.a 00:04:44.294 SO libspdk_jsonrpc.so.6.0 00:04:44.294 SYMLINK libspdk_jsonrpc.so 00:04:44.552 CC lib/rpc/rpc.o 00:04:44.552 LIB libspdk_env_dpdk.a 00:04:44.810 LIB libspdk_rpc.a 00:04:44.810 SO libspdk_env_dpdk.so.15.0 00:04:44.810 SO libspdk_rpc.so.6.0 00:04:44.810 SYMLINK libspdk_rpc.so 00:04:44.810 SYMLINK libspdk_env_dpdk.so 00:04:45.069 CC lib/notify/notify.o 00:04:45.069 CC lib/notify/notify_rpc.o 00:04:45.069 CC lib/trace/trace.o 00:04:45.069 CC lib/trace/trace_flags.o 00:04:45.069 CC lib/keyring/keyring_rpc.o 00:04:45.069 CC lib/keyring/keyring.o 00:04:45.069 CC lib/trace/trace_rpc.o 00:04:45.327 LIB libspdk_notify.a 00:04:45.327 SO libspdk_notify.so.6.0 00:04:45.327 SYMLINK libspdk_notify.so 00:04:45.327 LIB libspdk_keyring.a 00:04:45.327 LIB libspdk_trace.a 00:04:45.585 SO libspdk_keyring.so.2.0 00:04:45.585 SO libspdk_trace.so.11.0 00:04:45.585 SYMLINK libspdk_keyring.so 00:04:45.585 SYMLINK libspdk_trace.so 00:04:45.843 CC lib/thread/iobuf.o 00:04:45.843 CC lib/thread/thread.o 00:04:45.843 CC lib/sock/sock.o 00:04:45.843 CC lib/sock/sock_rpc.o 00:04:46.410 LIB libspdk_sock.a 00:04:46.410 SO libspdk_sock.so.10.0 00:04:46.677 SYMLINK libspdk_sock.so 00:04:46.950 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:46.950 CC lib/nvme/nvme_fabric.o 00:04:46.950 CC lib/nvme/nvme_ctrlr.o 00:04:46.950 CC lib/nvme/nvme_ns.o 00:04:46.950 CC lib/nvme/nvme_ns_cmd.o 00:04:46.950 CC lib/nvme/nvme_pcie.o 00:04:46.950 CC lib/nvme/nvme_qpair.o 00:04:46.950 CC lib/nvme/nvme_pcie_common.o 00:04:46.950 CC lib/nvme/nvme.o 00:04:47.886 CC lib/nvme/nvme_quirks.o 00:04:47.886 CC lib/nvme/nvme_transport.o 00:04:47.886 CC lib/nvme/nvme_discovery.o 00:04:47.886 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:47.886 LIB libspdk_thread.a 00:04:47.886 SO libspdk_thread.so.10.2 00:04:47.886 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:47.886 SYMLINK libspdk_thread.so 00:04:47.886 CC lib/nvme/nvme_tcp.o 00:04:47.886 CC lib/nvme/nvme_opal.o 00:04:47.886 CC lib/nvme/nvme_io_msg.o 00:04:48.144 CC 
lib/nvme/nvme_poll_group.o 00:04:48.403 CC lib/nvme/nvme_zns.o 00:04:48.403 CC lib/nvme/nvme_stubs.o 00:04:48.403 CC lib/accel/accel.o 00:04:48.661 CC lib/accel/accel_rpc.o 00:04:48.661 CC lib/nvme/nvme_auth.o 00:04:48.919 CC lib/accel/accel_sw.o 00:04:48.919 CC lib/blob/blobstore.o 00:04:48.919 CC lib/init/json_config.o 00:04:48.919 CC lib/init/subsystem.o 00:04:48.919 CC lib/init/subsystem_rpc.o 00:04:49.177 CC lib/init/rpc.o 00:04:49.177 CC lib/nvme/nvme_cuse.o 00:04:49.177 CC lib/nvme/nvme_rdma.o 00:04:49.177 CC lib/blob/request.o 00:04:49.177 LIB libspdk_init.a 00:04:49.435 SO libspdk_init.so.6.0 00:04:49.435 CC lib/virtio/virtio.o 00:04:49.435 SYMLINK libspdk_init.so 00:04:49.694 CC lib/fsdev/fsdev.o 00:04:49.694 CC lib/fsdev/fsdev_io.o 00:04:49.694 CC lib/fsdev/fsdev_rpc.o 00:04:49.694 CC lib/virtio/virtio_vhost_user.o 00:04:49.694 CC lib/virtio/virtio_vfio_user.o 00:04:49.953 CC lib/virtio/virtio_pci.o 00:04:49.953 LIB libspdk_accel.a 00:04:49.953 CC lib/event/app.o 00:04:49.953 SO libspdk_accel.so.16.0 00:04:49.953 CC lib/event/reactor.o 00:04:49.953 CC lib/event/log_rpc.o 00:04:50.212 SYMLINK libspdk_accel.so 00:04:50.212 CC lib/event/app_rpc.o 00:04:50.212 CC lib/blob/zeroes.o 00:04:50.212 CC lib/event/scheduler_static.o 00:04:50.212 LIB libspdk_virtio.a 00:04:50.470 SO libspdk_virtio.so.7.0 00:04:50.470 LIB libspdk_fsdev.a 00:04:50.470 CC lib/bdev/bdev.o 00:04:50.470 SO libspdk_fsdev.so.1.0 00:04:50.470 SYMLINK libspdk_virtio.so 00:04:50.470 CC lib/bdev/bdev_rpc.o 00:04:50.470 CC lib/blob/blob_bs_dev.o 00:04:50.470 CC lib/bdev/bdev_zone.o 00:04:50.470 CC lib/bdev/part.o 00:04:50.470 SYMLINK libspdk_fsdev.so 00:04:50.470 CC lib/bdev/scsi_nvme.o 00:04:50.728 LIB libspdk_event.a 00:04:50.728 SO libspdk_event.so.14.0 00:04:50.728 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:50.728 SYMLINK libspdk_event.so 00:04:50.987 LIB libspdk_nvme.a 00:04:51.245 SO libspdk_nvme.so.14.0 00:04:51.503 SYMLINK libspdk_nvme.so 00:04:51.762 LIB libspdk_fuse_dispatcher.a 
00:04:51.762 SO libspdk_fuse_dispatcher.so.1.0 00:04:51.762 SYMLINK libspdk_fuse_dispatcher.so 00:04:53.662 LIB libspdk_blob.a 00:04:53.662 SO libspdk_blob.so.11.0 00:04:53.662 SYMLINK libspdk_blob.so 00:04:53.921 CC lib/lvol/lvol.o 00:04:53.921 CC lib/blobfs/blobfs.o 00:04:53.921 CC lib/blobfs/tree.o 00:04:54.223 LIB libspdk_bdev.a 00:04:54.223 SO libspdk_bdev.so.17.0 00:04:54.223 SYMLINK libspdk_bdev.so 00:04:54.481 CC lib/ftl/ftl_core.o 00:04:54.481 CC lib/ftl/ftl_init.o 00:04:54.481 CC lib/ftl/ftl_layout.o 00:04:54.481 CC lib/ftl/ftl_debug.o 00:04:54.481 CC lib/scsi/dev.o 00:04:54.481 CC lib/nvmf/ctrlr.o 00:04:54.481 CC lib/nbd/nbd.o 00:04:54.481 CC lib/ublk/ublk.o 00:04:54.740 CC lib/scsi/lun.o 00:04:54.740 CC lib/scsi/port.o 00:04:54.999 CC lib/ftl/ftl_io.o 00:04:54.999 CC lib/ublk/ublk_rpc.o 00:04:54.999 CC lib/nvmf/ctrlr_discovery.o 00:04:54.999 LIB libspdk_blobfs.a 00:04:54.999 CC lib/nvmf/ctrlr_bdev.o 00:04:54.999 SO libspdk_blobfs.so.10.0 00:04:54.999 CC lib/nbd/nbd_rpc.o 00:04:55.257 CC lib/scsi/scsi.o 00:04:55.257 CC lib/ftl/ftl_sb.o 00:04:55.257 CC lib/ftl/ftl_l2p.o 00:04:55.257 SYMLINK libspdk_blobfs.so 00:04:55.257 CC lib/ftl/ftl_l2p_flat.o 00:04:55.257 LIB libspdk_lvol.a 00:04:55.257 LIB libspdk_nbd.a 00:04:55.257 SO libspdk_lvol.so.10.0 00:04:55.257 CC lib/scsi/scsi_bdev.o 00:04:55.257 SO libspdk_nbd.so.7.0 00:04:55.515 CC lib/scsi/scsi_pr.o 00:04:55.515 LIB libspdk_ublk.a 00:04:55.515 SYMLINK libspdk_lvol.so 00:04:55.515 SYMLINK libspdk_nbd.so 00:04:55.515 CC lib/ftl/ftl_nv_cache.o 00:04:55.515 CC lib/scsi/scsi_rpc.o 00:04:55.515 CC lib/scsi/task.o 00:04:55.515 CC lib/ftl/ftl_band.o 00:04:55.515 SO libspdk_ublk.so.3.0 00:04:55.515 SYMLINK libspdk_ublk.so 00:04:55.515 CC lib/ftl/ftl_band_ops.o 00:04:55.515 CC lib/ftl/ftl_writer.o 00:04:55.773 CC lib/nvmf/subsystem.o 00:04:55.773 CC lib/nvmf/nvmf.o 00:04:55.773 CC lib/nvmf/nvmf_rpc.o 00:04:55.773 CC lib/ftl/ftl_rq.o 00:04:56.032 CC lib/nvmf/transport.o 00:04:56.032 LIB libspdk_scsi.a 00:04:56.032 
CC lib/nvmf/tcp.o 00:04:56.032 CC lib/nvmf/stubs.o 00:04:56.032 SO libspdk_scsi.so.9.0 00:04:56.032 CC lib/nvmf/mdns_server.o 00:04:56.032 SYMLINK libspdk_scsi.so 00:04:56.032 CC lib/nvmf/rdma.o 00:04:56.599 CC lib/nvmf/auth.o 00:04:56.599 CC lib/ftl/ftl_reloc.o 00:04:56.599 CC lib/ftl/ftl_l2p_cache.o 00:04:56.857 CC lib/ftl/ftl_p2l.o 00:04:57.116 CC lib/ftl/ftl_p2l_log.o 00:04:57.116 CC lib/iscsi/conn.o 00:04:57.116 CC lib/vhost/vhost.o 00:04:57.116 CC lib/iscsi/init_grp.o 00:04:57.374 CC lib/ftl/mngt/ftl_mngt.o 00:04:57.374 CC lib/iscsi/iscsi.o 00:04:57.374 CC lib/iscsi/param.o 00:04:57.374 CC lib/iscsi/portal_grp.o 00:04:57.374 CC lib/iscsi/tgt_node.o 00:04:57.633 CC lib/vhost/vhost_rpc.o 00:04:57.633 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:57.633 CC lib/iscsi/iscsi_subsystem.o 00:04:57.910 CC lib/iscsi/iscsi_rpc.o 00:04:57.910 CC lib/iscsi/task.o 00:04:57.911 CC lib/vhost/vhost_scsi.o 00:04:57.911 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:57.911 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:57.911 CC lib/vhost/vhost_blk.o 00:04:58.171 CC lib/vhost/rte_vhost_user.o 00:04:58.171 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:58.171 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:58.171 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:58.429 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:58.429 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:58.429 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:58.429 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:58.429 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:58.688 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:58.688 CC lib/ftl/utils/ftl_conf.o 00:04:58.688 CC lib/ftl/utils/ftl_md.o 00:04:58.688 CC lib/ftl/utils/ftl_mempool.o 00:04:58.688 CC lib/ftl/utils/ftl_bitmap.o 00:04:58.946 CC lib/ftl/utils/ftl_property.o 00:04:58.946 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:58.946 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:58.946 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:58.946 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:59.204 LIB libspdk_nvmf.a 00:04:59.204 CC lib/ftl/upgrade/ftl_band_upgrade.o 
00:04:59.204 SO libspdk_nvmf.so.19.1 00:04:59.204 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:59.204 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:59.204 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:59.204 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:59.204 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:59.204 LIB libspdk_iscsi.a 00:04:59.462 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:59.462 LIB libspdk_vhost.a 00:04:59.462 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:59.462 SO libspdk_iscsi.so.8.0 00:04:59.462 SO libspdk_vhost.so.8.0 00:04:59.462 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:59.462 CC lib/ftl/base/ftl_base_dev.o 00:04:59.462 CC lib/ftl/base/ftl_base_bdev.o 00:04:59.462 CC lib/ftl/ftl_trace.o 00:04:59.462 SYMLINK libspdk_nvmf.so 00:04:59.462 SYMLINK libspdk_vhost.so 00:04:59.719 SYMLINK libspdk_iscsi.so 00:04:59.719 LIB libspdk_ftl.a 00:04:59.977 SO libspdk_ftl.so.9.0 00:05:00.544 SYMLINK libspdk_ftl.so 00:05:00.803 CC module/env_dpdk/env_dpdk_rpc.o 00:05:00.803 CC module/accel/iaa/accel_iaa.o 00:05:00.803 CC module/accel/error/accel_error.o 00:05:00.803 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:00.803 CC module/sock/posix/posix.o 00:05:00.803 CC module/blob/bdev/blob_bdev.o 00:05:00.803 CC module/accel/ioat/accel_ioat.o 00:05:00.803 CC module/accel/dsa/accel_dsa.o 00:05:00.803 CC module/fsdev/aio/fsdev_aio.o 00:05:00.803 CC module/keyring/file/keyring.o 00:05:01.062 LIB libspdk_env_dpdk_rpc.a 00:05:01.062 SO libspdk_env_dpdk_rpc.so.6.0 00:05:01.062 SYMLINK libspdk_env_dpdk_rpc.so 00:05:01.062 CC module/keyring/file/keyring_rpc.o 00:05:01.062 CC module/accel/iaa/accel_iaa_rpc.o 00:05:01.062 CC module/accel/error/accel_error_rpc.o 00:05:01.062 LIB libspdk_scheduler_dynamic.a 00:05:01.062 CC module/fsdev/aio/fsdev_aio_rpc.o 00:05:01.062 SO libspdk_scheduler_dynamic.so.4.0 00:05:01.062 CC module/accel/ioat/accel_ioat_rpc.o 00:05:01.320 SYMLINK libspdk_scheduler_dynamic.so 00:05:01.320 LIB libspdk_blob_bdev.a 00:05:01.320 LIB libspdk_accel_iaa.a 00:05:01.320 LIB 
libspdk_keyring_file.a 00:05:01.320 LIB libspdk_accel_error.a 00:05:01.320 SO libspdk_blob_bdev.so.11.0 00:05:01.320 SO libspdk_accel_iaa.so.3.0 00:05:01.320 SO libspdk_keyring_file.so.2.0 00:05:01.320 SO libspdk_accel_error.so.2.0 00:05:01.320 LIB libspdk_accel_ioat.a 00:05:01.320 CC module/accel/dsa/accel_dsa_rpc.o 00:05:01.320 SYMLINK libspdk_blob_bdev.so 00:05:01.320 SYMLINK libspdk_accel_iaa.so 00:05:01.320 SO libspdk_accel_ioat.so.6.0 00:05:01.320 CC module/fsdev/aio/linux_aio_mgr.o 00:05:01.320 SYMLINK libspdk_keyring_file.so 00:05:01.320 SYMLINK libspdk_accel_error.so 00:05:01.320 SYMLINK libspdk_accel_ioat.so 00:05:01.320 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:01.320 LIB libspdk_accel_dsa.a 00:05:01.579 SO libspdk_accel_dsa.so.5.0 00:05:01.579 CC module/keyring/linux/keyring.o 00:05:01.579 CC module/scheduler/gscheduler/gscheduler.o 00:05:01.579 SYMLINK libspdk_accel_dsa.so 00:05:01.579 CC module/keyring/linux/keyring_rpc.o 00:05:01.579 LIB libspdk_scheduler_dpdk_governor.a 00:05:01.579 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:01.579 CC module/bdev/delay/vbdev_delay.o 00:05:01.579 CC module/bdev/error/vbdev_error.o 00:05:01.579 CC module/blobfs/bdev/blobfs_bdev.o 00:05:01.579 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:01.579 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:01.579 LIB libspdk_keyring_linux.a 00:05:01.579 LIB libspdk_scheduler_gscheduler.a 00:05:01.836 SO libspdk_scheduler_gscheduler.so.4.0 00:05:01.836 SO libspdk_keyring_linux.so.1.0 00:05:01.836 LIB libspdk_fsdev_aio.a 00:05:01.836 CC module/bdev/gpt/gpt.o 00:05:01.836 SO libspdk_fsdev_aio.so.1.0 00:05:01.836 SYMLINK libspdk_scheduler_gscheduler.so 00:05:01.836 SYMLINK libspdk_keyring_linux.so 00:05:01.836 CC module/bdev/gpt/vbdev_gpt.o 00:05:01.836 CC module/bdev/error/vbdev_error_rpc.o 00:05:01.836 LIB libspdk_sock_posix.a 00:05:01.836 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:01.836 SO libspdk_sock_posix.so.6.0 00:05:01.836 CC module/bdev/lvol/vbdev_lvol.o 
00:05:01.836 SYMLINK libspdk_fsdev_aio.so 00:05:01.836 SYMLINK libspdk_sock_posix.so 00:05:02.132 LIB libspdk_bdev_error.a 00:05:02.132 LIB libspdk_blobfs_bdev.a 00:05:02.132 SO libspdk_bdev_error.so.6.0 00:05:02.132 LIB libspdk_bdev_delay.a 00:05:02.132 SO libspdk_blobfs_bdev.so.6.0 00:05:02.132 CC module/bdev/malloc/bdev_malloc.o 00:05:02.132 SO libspdk_bdev_delay.so.6.0 00:05:02.132 CC module/bdev/null/bdev_null.o 00:05:02.132 SYMLINK libspdk_bdev_error.so 00:05:02.132 LIB libspdk_bdev_gpt.a 00:05:02.132 SYMLINK libspdk_blobfs_bdev.so 00:05:02.132 CC module/bdev/nvme/bdev_nvme.o 00:05:02.132 CC module/bdev/passthru/vbdev_passthru.o 00:05:02.132 CC module/bdev/raid/bdev_raid.o 00:05:02.132 SO libspdk_bdev_gpt.so.6.0 00:05:02.132 SYMLINK libspdk_bdev_delay.so 00:05:02.132 CC module/bdev/null/bdev_null_rpc.o 00:05:02.391 SYMLINK libspdk_bdev_gpt.so 00:05:02.391 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:02.391 CC module/bdev/split/vbdev_split.o 00:05:02.391 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:02.391 CC module/bdev/split/vbdev_split_rpc.o 00:05:02.391 LIB libspdk_bdev_null.a 00:05:02.391 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:02.391 SO libspdk_bdev_null.so.6.0 00:05:02.649 SYMLINK libspdk_bdev_null.so 00:05:02.649 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:02.649 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:02.649 LIB libspdk_bdev_split.a 00:05:02.649 SO libspdk_bdev_split.so.6.0 00:05:02.649 LIB libspdk_bdev_passthru.a 00:05:02.649 SO libspdk_bdev_passthru.so.6.0 00:05:02.649 CC module/bdev/aio/bdev_aio.o 00:05:02.649 LIB libspdk_bdev_lvol.a 00:05:02.649 SYMLINK libspdk_bdev_split.so 00:05:02.649 CC module/bdev/ftl/bdev_ftl.o 00:05:02.649 SO libspdk_bdev_lvol.so.6.0 00:05:02.649 LIB libspdk_bdev_zone_block.a 00:05:02.649 LIB libspdk_bdev_malloc.a 00:05:02.906 SYMLINK libspdk_bdev_passthru.so 00:05:02.906 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:02.906 SO libspdk_bdev_zone_block.so.6.0 00:05:02.906 SO libspdk_bdev_malloc.so.6.0 
00:05:02.906 SYMLINK libspdk_bdev_lvol.so 00:05:02.906 CC module/bdev/aio/bdev_aio_rpc.o 00:05:02.906 SYMLINK libspdk_bdev_zone_block.so 00:05:02.906 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:02.906 SYMLINK libspdk_bdev_malloc.so 00:05:02.906 CC module/bdev/iscsi/bdev_iscsi.o 00:05:02.906 CC module/bdev/nvme/nvme_rpc.o 00:05:02.906 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:02.906 CC module/bdev/raid/bdev_raid_rpc.o 00:05:02.906 CC module/bdev/raid/bdev_raid_sb.o 00:05:03.164 LIB libspdk_bdev_ftl.a 00:05:03.164 SO libspdk_bdev_ftl.so.6.0 00:05:03.164 LIB libspdk_bdev_aio.a 00:05:03.164 SYMLINK libspdk_bdev_ftl.so 00:05:03.164 CC module/bdev/raid/raid0.o 00:05:03.164 CC module/bdev/raid/raid1.o 00:05:03.164 SO libspdk_bdev_aio.so.6.0 00:05:03.164 SYMLINK libspdk_bdev_aio.so 00:05:03.164 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:03.164 CC module/bdev/raid/concat.o 00:05:03.422 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:03.422 CC module/bdev/raid/raid5f.o 00:05:03.422 LIB libspdk_bdev_iscsi.a 00:05:03.422 CC module/bdev/nvme/bdev_mdns_client.o 00:05:03.422 SO libspdk_bdev_iscsi.so.6.0 00:05:03.422 CC module/bdev/nvme/vbdev_opal.o 00:05:03.422 SYMLINK libspdk_bdev_iscsi.so 00:05:03.422 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:03.422 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:03.422 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:03.681 LIB libspdk_bdev_virtio.a 00:05:03.939 SO libspdk_bdev_virtio.so.6.0 00:05:03.939 SYMLINK libspdk_bdev_virtio.so 00:05:03.939 LIB libspdk_bdev_raid.a 00:05:04.198 SO libspdk_bdev_raid.so.6.0 00:05:04.198 SYMLINK libspdk_bdev_raid.so 00:05:05.133 LIB libspdk_bdev_nvme.a 00:05:05.391 SO libspdk_bdev_nvme.so.7.0 00:05:05.391 SYMLINK libspdk_bdev_nvme.so 00:05:05.956 CC module/event/subsystems/fsdev/fsdev.o 00:05:05.956 CC module/event/subsystems/vmd/vmd.o 00:05:05.956 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:05.956 CC module/event/subsystems/keyring/keyring.o 00:05:05.956 CC module/event/subsystems/sock/sock.o 
00:05:05.956 CC module/event/subsystems/scheduler/scheduler.o 00:05:05.956 CC module/event/subsystems/iobuf/iobuf.o 00:05:05.956 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:05.956 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:06.214 LIB libspdk_event_sock.a 00:05:06.214 LIB libspdk_event_fsdev.a 00:05:06.214 LIB libspdk_event_keyring.a 00:05:06.214 LIB libspdk_event_vhost_blk.a 00:05:06.214 LIB libspdk_event_vmd.a 00:05:06.214 LIB libspdk_event_scheduler.a 00:05:06.214 SO libspdk_event_fsdev.so.1.0 00:05:06.214 SO libspdk_event_keyring.so.1.0 00:05:06.214 SO libspdk_event_sock.so.5.0 00:05:06.214 SO libspdk_event_vhost_blk.so.3.0 00:05:06.214 LIB libspdk_event_iobuf.a 00:05:06.214 SO libspdk_event_vmd.so.6.0 00:05:06.214 SO libspdk_event_scheduler.so.4.0 00:05:06.214 SO libspdk_event_iobuf.so.3.0 00:05:06.214 SYMLINK libspdk_event_keyring.so 00:05:06.214 SYMLINK libspdk_event_vhost_blk.so 00:05:06.214 SYMLINK libspdk_event_fsdev.so 00:05:06.214 SYMLINK libspdk_event_sock.so 00:05:06.214 SYMLINK libspdk_event_vmd.so 00:05:06.214 SYMLINK libspdk_event_scheduler.so 00:05:06.214 SYMLINK libspdk_event_iobuf.so 00:05:06.472 CC module/event/subsystems/accel/accel.o 00:05:06.730 LIB libspdk_event_accel.a 00:05:06.730 SO libspdk_event_accel.so.6.0 00:05:06.730 SYMLINK libspdk_event_accel.so 00:05:06.987 CC module/event/subsystems/bdev/bdev.o 00:05:07.287 LIB libspdk_event_bdev.a 00:05:07.287 SO libspdk_event_bdev.so.6.0 00:05:07.287 SYMLINK libspdk_event_bdev.so 00:05:07.544 CC module/event/subsystems/ublk/ublk.o 00:05:07.544 CC module/event/subsystems/nbd/nbd.o 00:05:07.544 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:07.544 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:07.544 CC module/event/subsystems/scsi/scsi.o 00:05:07.803 LIB libspdk_event_nbd.a 00:05:07.803 LIB libspdk_event_ublk.a 00:05:07.803 LIB libspdk_event_scsi.a 00:05:07.803 SO libspdk_event_nbd.so.6.0 00:05:07.803 SO libspdk_event_ublk.so.3.0 00:05:07.803 SO libspdk_event_scsi.so.6.0 
00:05:07.803 SYMLINK libspdk_event_nbd.so 00:05:07.803 SYMLINK libspdk_event_ublk.so 00:05:07.803 SYMLINK libspdk_event_scsi.so 00:05:08.061 LIB libspdk_event_nvmf.a 00:05:08.061 SO libspdk_event_nvmf.so.6.0 00:05:08.061 SYMLINK libspdk_event_nvmf.so 00:05:08.061 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:08.319 CC module/event/subsystems/iscsi/iscsi.o 00:05:08.319 LIB libspdk_event_vhost_scsi.a 00:05:08.319 SO libspdk_event_vhost_scsi.so.3.0 00:05:08.319 LIB libspdk_event_iscsi.a 00:05:08.319 SO libspdk_event_iscsi.so.6.0 00:05:08.577 SYMLINK libspdk_event_vhost_scsi.so 00:05:08.577 SYMLINK libspdk_event_iscsi.so 00:05:08.577 SO libspdk.so.6.0 00:05:08.577 SYMLINK libspdk.so 00:05:08.835 CXX app/trace/trace.o 00:05:08.835 CC test/rpc_client/rpc_client_test.o 00:05:08.835 TEST_HEADER include/spdk/accel.h 00:05:08.835 TEST_HEADER include/spdk/accel_module.h 00:05:08.836 TEST_HEADER include/spdk/assert.h 00:05:08.836 TEST_HEADER include/spdk/barrier.h 00:05:08.836 TEST_HEADER include/spdk/base64.h 00:05:08.836 TEST_HEADER include/spdk/bdev.h 00:05:08.836 TEST_HEADER include/spdk/bdev_module.h 00:05:09.093 TEST_HEADER include/spdk/bdev_zone.h 00:05:09.093 TEST_HEADER include/spdk/bit_array.h 00:05:09.093 TEST_HEADER include/spdk/bit_pool.h 00:05:09.093 TEST_HEADER include/spdk/blob_bdev.h 00:05:09.093 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:09.093 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:09.093 TEST_HEADER include/spdk/blobfs.h 00:05:09.093 TEST_HEADER include/spdk/blob.h 00:05:09.093 TEST_HEADER include/spdk/conf.h 00:05:09.093 TEST_HEADER include/spdk/config.h 00:05:09.093 TEST_HEADER include/spdk/cpuset.h 00:05:09.093 TEST_HEADER include/spdk/crc16.h 00:05:09.093 TEST_HEADER include/spdk/crc32.h 00:05:09.093 TEST_HEADER include/spdk/crc64.h 00:05:09.093 TEST_HEADER include/spdk/dif.h 00:05:09.093 TEST_HEADER include/spdk/dma.h 00:05:09.093 TEST_HEADER include/spdk/endian.h 00:05:09.093 TEST_HEADER include/spdk/env_dpdk.h 00:05:09.093 
TEST_HEADER include/spdk/env.h 00:05:09.093 TEST_HEADER include/spdk/event.h 00:05:09.093 TEST_HEADER include/spdk/fd_group.h 00:05:09.093 TEST_HEADER include/spdk/fd.h 00:05:09.093 TEST_HEADER include/spdk/file.h 00:05:09.093 TEST_HEADER include/spdk/fsdev.h 00:05:09.093 CC test/thread/poller_perf/poller_perf.o 00:05:09.093 CC examples/ioat/perf/perf.o 00:05:09.093 TEST_HEADER include/spdk/fsdev_module.h 00:05:09.093 TEST_HEADER include/spdk/ftl.h 00:05:09.093 TEST_HEADER include/spdk/fuse_dispatcher.h 00:05:09.093 TEST_HEADER include/spdk/gpt_spec.h 00:05:09.093 TEST_HEADER include/spdk/hexlify.h 00:05:09.093 TEST_HEADER include/spdk/histogram_data.h 00:05:09.093 TEST_HEADER include/spdk/idxd.h 00:05:09.093 CC examples/util/zipf/zipf.o 00:05:09.093 TEST_HEADER include/spdk/idxd_spec.h 00:05:09.093 TEST_HEADER include/spdk/init.h 00:05:09.093 TEST_HEADER include/spdk/ioat.h 00:05:09.093 TEST_HEADER include/spdk/ioat_spec.h 00:05:09.093 TEST_HEADER include/spdk/iscsi_spec.h 00:05:09.093 TEST_HEADER include/spdk/json.h 00:05:09.093 TEST_HEADER include/spdk/jsonrpc.h 00:05:09.093 TEST_HEADER include/spdk/keyring.h 00:05:09.093 TEST_HEADER include/spdk/keyring_module.h 00:05:09.093 CC test/dma/test_dma/test_dma.o 00:05:09.093 TEST_HEADER include/spdk/likely.h 00:05:09.093 TEST_HEADER include/spdk/log.h 00:05:09.093 TEST_HEADER include/spdk/lvol.h 00:05:09.093 TEST_HEADER include/spdk/md5.h 00:05:09.093 TEST_HEADER include/spdk/memory.h 00:05:09.094 TEST_HEADER include/spdk/mmio.h 00:05:09.094 TEST_HEADER include/spdk/nbd.h 00:05:09.094 TEST_HEADER include/spdk/net.h 00:05:09.094 TEST_HEADER include/spdk/notify.h 00:05:09.094 TEST_HEADER include/spdk/nvme.h 00:05:09.094 TEST_HEADER include/spdk/nvme_intel.h 00:05:09.094 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:09.094 CC test/app/bdev_svc/bdev_svc.o 00:05:09.094 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:09.094 TEST_HEADER include/spdk/nvme_spec.h 00:05:09.094 TEST_HEADER include/spdk/nvme_zns.h 00:05:09.094 
TEST_HEADER include/spdk/nvmf_cmd.h 00:05:09.094 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:09.094 TEST_HEADER include/spdk/nvmf.h 00:05:09.094 TEST_HEADER include/spdk/nvmf_spec.h 00:05:09.094 TEST_HEADER include/spdk/nvmf_transport.h 00:05:09.094 TEST_HEADER include/spdk/opal.h 00:05:09.094 TEST_HEADER include/spdk/opal_spec.h 00:05:09.094 TEST_HEADER include/spdk/pci_ids.h 00:05:09.094 TEST_HEADER include/spdk/pipe.h 00:05:09.094 TEST_HEADER include/spdk/queue.h 00:05:09.094 TEST_HEADER include/spdk/reduce.h 00:05:09.094 TEST_HEADER include/spdk/rpc.h 00:05:09.094 TEST_HEADER include/spdk/scheduler.h 00:05:09.094 CC test/env/mem_callbacks/mem_callbacks.o 00:05:09.094 TEST_HEADER include/spdk/scsi.h 00:05:09.094 TEST_HEADER include/spdk/scsi_spec.h 00:05:09.094 TEST_HEADER include/spdk/sock.h 00:05:09.094 TEST_HEADER include/spdk/stdinc.h 00:05:09.094 TEST_HEADER include/spdk/string.h 00:05:09.094 TEST_HEADER include/spdk/thread.h 00:05:09.094 TEST_HEADER include/spdk/trace.h 00:05:09.094 TEST_HEADER include/spdk/trace_parser.h 00:05:09.094 TEST_HEADER include/spdk/tree.h 00:05:09.094 TEST_HEADER include/spdk/ublk.h 00:05:09.094 TEST_HEADER include/spdk/util.h 00:05:09.094 TEST_HEADER include/spdk/uuid.h 00:05:09.094 TEST_HEADER include/spdk/version.h 00:05:09.094 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:09.094 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:09.094 TEST_HEADER include/spdk/vhost.h 00:05:09.094 TEST_HEADER include/spdk/vmd.h 00:05:09.094 LINK rpc_client_test 00:05:09.094 TEST_HEADER include/spdk/xor.h 00:05:09.094 TEST_HEADER include/spdk/zipf.h 00:05:09.094 CXX test/cpp_headers/accel.o 00:05:09.351 LINK interrupt_tgt 00:05:09.351 LINK poller_perf 00:05:09.351 LINK zipf 00:05:09.351 LINK ioat_perf 00:05:09.351 LINK bdev_svc 00:05:09.351 CXX test/cpp_headers/accel_module.o 00:05:09.351 CC test/env/vtophys/vtophys.o 00:05:09.351 LINK spdk_trace 00:05:09.351 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:09.610 CC 
test/env/memory/memory_ut.o 00:05:09.610 CC examples/ioat/verify/verify.o 00:05:09.610 CXX test/cpp_headers/assert.o 00:05:09.610 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:09.610 LINK vtophys 00:05:09.610 CC test/env/pci/pci_ut.o 00:05:09.610 LINK env_dpdk_post_init 00:05:09.610 LINK test_dma 00:05:09.868 CC app/trace_record/trace_record.o 00:05:09.868 CXX test/cpp_headers/barrier.o 00:05:09.868 LINK verify 00:05:09.868 LINK mem_callbacks 00:05:09.868 CC test/app/histogram_perf/histogram_perf.o 00:05:09.868 CC app/nvmf_tgt/nvmf_main.o 00:05:09.868 CXX test/cpp_headers/base64.o 00:05:10.126 CC app/iscsi_tgt/iscsi_tgt.o 00:05:10.126 LINK spdk_trace_record 00:05:10.126 LINK histogram_perf 00:05:10.126 LINK nvme_fuzz 00:05:10.126 LINK pci_ut 00:05:10.126 CXX test/cpp_headers/bdev.o 00:05:10.126 LINK nvmf_tgt 00:05:10.126 CC app/spdk_tgt/spdk_tgt.o 00:05:10.385 CC examples/thread/thread/thread_ex.o 00:05:10.385 LINK iscsi_tgt 00:05:10.385 LINK spdk_tgt 00:05:10.385 CXX test/cpp_headers/bdev_module.o 00:05:10.385 CC test/event/event_perf/event_perf.o 00:05:10.385 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:10.385 CC test/nvme/aer/aer.o 00:05:10.644 LINK thread 00:05:10.644 LINK event_perf 00:05:10.644 CC test/app/jsoncat/jsoncat.o 00:05:10.644 CXX test/cpp_headers/bdev_zone.o 00:05:10.902 CC test/accel/dif/dif.o 00:05:10.902 CC app/spdk_lspci/spdk_lspci.o 00:05:10.902 LINK jsoncat 00:05:10.902 CC test/blobfs/mkfs/mkfs.o 00:05:10.902 CXX test/cpp_headers/bit_array.o 00:05:10.902 CC test/event/reactor/reactor.o 00:05:10.902 LINK aer 00:05:10.902 LINK spdk_lspci 00:05:11.160 LINK memory_ut 00:05:11.160 CXX test/cpp_headers/bit_pool.o 00:05:11.160 CC examples/sock/hello_world/hello_sock.o 00:05:11.160 LINK mkfs 00:05:11.160 LINK reactor 00:05:11.160 CC test/event/reactor_perf/reactor_perf.o 00:05:11.160 CXX test/cpp_headers/blob_bdev.o 00:05:11.419 CC app/spdk_nvme_perf/perf.o 00:05:11.419 CC test/nvme/reset/reset.o 00:05:11.419 CC test/event/app_repeat/app_repeat.o 
00:05:11.419 LINK hello_sock 00:05:11.419 LINK reactor_perf 00:05:11.419 CC test/event/scheduler/scheduler.o 00:05:11.419 CXX test/cpp_headers/blobfs_bdev.o 00:05:11.419 LINK app_repeat 00:05:11.679 CC test/lvol/esnap/esnap.o 00:05:11.679 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:11.679 LINK reset 00:05:11.679 CXX test/cpp_headers/blobfs.o 00:05:11.679 LINK dif 00:05:11.679 LINK scheduler 00:05:11.679 CXX test/cpp_headers/blob.o 00:05:11.679 CC examples/vmd/lsvmd/lsvmd.o 00:05:11.679 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:11.938 CXX test/cpp_headers/conf.o 00:05:11.938 CC test/nvme/sgl/sgl.o 00:05:11.938 LINK lsvmd 00:05:11.938 CC test/app/stub/stub.o 00:05:12.196 CC examples/vmd/led/led.o 00:05:12.196 CC test/nvme/e2edp/nvme_dp.o 00:05:12.196 CXX test/cpp_headers/config.o 00:05:12.196 CXX test/cpp_headers/cpuset.o 00:05:12.196 LINK led 00:05:12.196 CC test/nvme/overhead/overhead.o 00:05:12.455 LINK stub 00:05:12.455 LINK sgl 00:05:12.455 LINK vhost_fuzz 00:05:12.455 CXX test/cpp_headers/crc16.o 00:05:12.455 LINK spdk_nvme_perf 00:05:12.455 LINK nvme_dp 00:05:12.713 CXX test/cpp_headers/crc32.o 00:05:12.713 CC test/nvme/reserve/reserve.o 00:05:12.714 CC test/nvme/startup/startup.o 00:05:12.714 CC test/nvme/err_injection/err_injection.o 00:05:12.714 CC examples/idxd/perf/perf.o 00:05:12.714 LINK overhead 00:05:12.714 CC test/nvme/simple_copy/simple_copy.o 00:05:12.714 CC app/spdk_nvme_identify/identify.o 00:05:12.714 CXX test/cpp_headers/crc64.o 00:05:12.972 LINK iscsi_fuzz 00:05:12.972 LINK startup 00:05:12.972 LINK err_injection 00:05:12.972 LINK reserve 00:05:12.972 CXX test/cpp_headers/dif.o 00:05:12.972 CC test/nvme/connect_stress/connect_stress.o 00:05:12.972 LINK simple_copy 00:05:13.231 LINK idxd_perf 00:05:13.231 CXX test/cpp_headers/dma.o 00:05:13.231 CC app/spdk_nvme_discover/discovery_aer.o 00:05:13.231 LINK connect_stress 00:05:13.231 CC test/nvme/boot_partition/boot_partition.o 00:05:13.231 CC examples/fsdev/hello_world/hello_fsdev.o 
00:05:13.231 CC test/nvme/compliance/nvme_compliance.o 00:05:13.489 CXX test/cpp_headers/endian.o 00:05:13.489 CC test/bdev/bdevio/bdevio.o 00:05:13.489 CC app/spdk_top/spdk_top.o 00:05:13.489 LINK spdk_nvme_discover 00:05:13.489 LINK boot_partition 00:05:13.489 CC test/nvme/fused_ordering/fused_ordering.o 00:05:13.489 CXX test/cpp_headers/env_dpdk.o 00:05:13.748 LINK hello_fsdev 00:05:13.748 CC app/vhost/vhost.o 00:05:13.748 LINK spdk_nvme_identify 00:05:13.748 CXX test/cpp_headers/env.o 00:05:13.748 LINK nvme_compliance 00:05:13.748 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:13.748 LINK fused_ordering 00:05:13.748 LINK bdevio 00:05:14.006 CXX test/cpp_headers/event.o 00:05:14.006 LINK vhost 00:05:14.006 LINK doorbell_aers 00:05:14.006 CC examples/accel/perf/accel_perf.o 00:05:14.265 CXX test/cpp_headers/fd_group.o 00:05:14.265 CC app/spdk_dd/spdk_dd.o 00:05:14.265 CC test/nvme/fdp/fdp.o 00:05:14.265 CC examples/nvme/hello_world/hello_world.o 00:05:14.265 CXX test/cpp_headers/fd.o 00:05:14.265 CC examples/blob/hello_world/hello_blob.o 00:05:14.265 CC examples/nvme/reconnect/reconnect.o 00:05:14.523 CXX test/cpp_headers/file.o 00:05:14.523 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:14.523 LINK hello_world 00:05:14.523 LINK hello_blob 00:05:14.523 LINK spdk_top 00:05:14.523 CXX test/cpp_headers/fsdev.o 00:05:14.523 LINK fdp 00:05:14.523 LINK spdk_dd 00:05:14.782 LINK accel_perf 00:05:14.782 CC examples/nvme/arbitration/arbitration.o 00:05:14.782 CXX test/cpp_headers/fsdev_module.o 00:05:14.782 LINK reconnect 00:05:14.782 CC examples/blob/cli/blobcli.o 00:05:14.782 CC examples/nvme/hotplug/hotplug.o 00:05:14.782 CC test/nvme/cuse/cuse.o 00:05:15.041 CXX test/cpp_headers/ftl.o 00:05:15.041 CC app/fio/nvme/fio_plugin.o 00:05:15.041 CC app/fio/bdev/fio_plugin.o 00:05:15.041 LINK nvme_manage 00:05:15.041 CXX test/cpp_headers/fuse_dispatcher.o 00:05:15.300 LINK hotplug 00:05:15.300 LINK arbitration 00:05:15.300 CC examples/bdev/hello_world/hello_bdev.o 
00:05:15.300 CXX test/cpp_headers/gpt_spec.o 00:05:15.559 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:15.559 CC examples/nvme/abort/abort.o 00:05:15.559 LINK blobcli 00:05:15.559 CXX test/cpp_headers/hexlify.o 00:05:15.559 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:15.559 LINK hello_bdev 00:05:15.559 LINK cmb_copy 00:05:15.559 CXX test/cpp_headers/histogram_data.o 00:05:15.816 LINK pmr_persistence 00:05:15.817 CXX test/cpp_headers/idxd.o 00:05:15.817 LINK spdk_bdev 00:05:15.817 LINK spdk_nvme 00:05:15.817 CC examples/bdev/bdevperf/bdevperf.o 00:05:15.817 CXX test/cpp_headers/idxd_spec.o 00:05:15.817 CXX test/cpp_headers/init.o 00:05:15.817 CXX test/cpp_headers/ioat.o 00:05:15.817 CXX test/cpp_headers/ioat_spec.o 00:05:15.817 CXX test/cpp_headers/iscsi_spec.o 00:05:16.074 LINK abort 00:05:16.074 CXX test/cpp_headers/json.o 00:05:16.074 CXX test/cpp_headers/jsonrpc.o 00:05:16.074 CXX test/cpp_headers/keyring.o 00:05:16.074 CXX test/cpp_headers/keyring_module.o 00:05:16.074 CXX test/cpp_headers/likely.o 00:05:16.074 CXX test/cpp_headers/log.o 00:05:16.074 CXX test/cpp_headers/lvol.o 00:05:16.074 CXX test/cpp_headers/md5.o 00:05:16.386 CXX test/cpp_headers/memory.o 00:05:16.386 CXX test/cpp_headers/mmio.o 00:05:16.386 CXX test/cpp_headers/nbd.o 00:05:16.386 CXX test/cpp_headers/net.o 00:05:16.386 CXX test/cpp_headers/notify.o 00:05:16.386 CXX test/cpp_headers/nvme.o 00:05:16.386 CXX test/cpp_headers/nvme_intel.o 00:05:16.386 CXX test/cpp_headers/nvme_ocssd.o 00:05:16.386 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:16.644 CXX test/cpp_headers/nvme_spec.o 00:05:16.644 CXX test/cpp_headers/nvme_zns.o 00:05:16.644 CXX test/cpp_headers/nvmf_cmd.o 00:05:16.644 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:16.644 CXX test/cpp_headers/nvmf.o 00:05:16.644 LINK cuse 00:05:16.644 CXX test/cpp_headers/nvmf_spec.o 00:05:16.644 CXX test/cpp_headers/nvmf_transport.o 00:05:16.644 CXX test/cpp_headers/opal.o 00:05:16.644 CXX test/cpp_headers/opal_spec.o 00:05:16.902 CXX 
test/cpp_headers/pci_ids.o 00:05:16.902 CXX test/cpp_headers/pipe.o 00:05:16.902 CXX test/cpp_headers/queue.o 00:05:16.902 CXX test/cpp_headers/reduce.o 00:05:16.902 CXX test/cpp_headers/rpc.o 00:05:16.902 CXX test/cpp_headers/scheduler.o 00:05:16.902 LINK bdevperf 00:05:16.902 CXX test/cpp_headers/scsi.o 00:05:16.902 CXX test/cpp_headers/scsi_spec.o 00:05:16.902 CXX test/cpp_headers/sock.o 00:05:16.902 CXX test/cpp_headers/stdinc.o 00:05:16.902 CXX test/cpp_headers/string.o 00:05:16.902 CXX test/cpp_headers/thread.o 00:05:17.160 CXX test/cpp_headers/trace.o 00:05:17.160 CXX test/cpp_headers/trace_parser.o 00:05:17.160 CXX test/cpp_headers/tree.o 00:05:17.160 CXX test/cpp_headers/ublk.o 00:05:17.160 CXX test/cpp_headers/util.o 00:05:17.160 CXX test/cpp_headers/uuid.o 00:05:17.160 CXX test/cpp_headers/version.o 00:05:17.160 CXX test/cpp_headers/vfio_user_pci.o 00:05:17.160 CXX test/cpp_headers/vfio_user_spec.o 00:05:17.160 CXX test/cpp_headers/vhost.o 00:05:17.160 CXX test/cpp_headers/vmd.o 00:05:17.160 CXX test/cpp_headers/xor.o 00:05:17.418 CXX test/cpp_headers/zipf.o 00:05:17.418 CC examples/nvmf/nvmf/nvmf.o 00:05:17.677 LINK nvmf 00:05:19.052 LINK esnap 00:05:19.619 00:05:19.619 real 1m35.260s 00:05:19.619 user 9m3.397s 00:05:19.619 sys 1m44.049s 00:05:19.619 20:02:05 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:05:19.619 20:02:05 make -- common/autotest_common.sh@10 -- $ set +x 00:05:19.619 ************************************ 00:05:19.619 END TEST make 00:05:19.619 ************************************ 00:05:19.619 20:02:05 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:19.619 20:02:05 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:19.619 20:02:05 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:19.619 20:02:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:19.619 20:02:05 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:05:19.619 20:02:05 -- 
pm/common@44 -- $ pid=5278 00:05:19.619 20:02:05 -- pm/common@50 -- $ kill -TERM 5278 00:05:19.619 20:02:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:19.619 20:02:05 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:19.619 20:02:05 -- pm/common@44 -- $ pid=5280 00:05:19.619 20:02:05 -- pm/common@50 -- $ kill -TERM 5280 00:05:19.877 20:02:05 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:19.877 20:02:05 -- common/autotest_common.sh@1691 -- # lcov --version 00:05:19.877 20:02:05 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:19.877 20:02:05 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:19.877 20:02:05 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:19.877 20:02:05 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:19.877 20:02:05 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:19.877 20:02:05 -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.877 20:02:05 -- scripts/common.sh@336 -- # read -ra ver1 00:05:19.877 20:02:05 -- scripts/common.sh@337 -- # IFS=.-: 00:05:19.877 20:02:05 -- scripts/common.sh@337 -- # read -ra ver2 00:05:19.877 20:02:05 -- scripts/common.sh@338 -- # local 'op=<' 00:05:19.877 20:02:05 -- scripts/common.sh@340 -- # ver1_l=2 00:05:19.877 20:02:05 -- scripts/common.sh@341 -- # ver2_l=1 00:05:19.877 20:02:05 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:19.877 20:02:05 -- scripts/common.sh@344 -- # case "$op" in 00:05:19.877 20:02:05 -- scripts/common.sh@345 -- # : 1 00:05:19.877 20:02:05 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:19.877 20:02:05 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:19.877 20:02:05 -- scripts/common.sh@365 -- # decimal 1 00:05:19.877 20:02:05 -- scripts/common.sh@353 -- # local d=1 00:05:19.877 20:02:05 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.877 20:02:05 -- scripts/common.sh@355 -- # echo 1 00:05:19.877 20:02:05 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:19.877 20:02:05 -- scripts/common.sh@366 -- # decimal 2 00:05:19.877 20:02:05 -- scripts/common.sh@353 -- # local d=2 00:05:19.877 20:02:05 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.877 20:02:05 -- scripts/common.sh@355 -- # echo 2 00:05:19.877 20:02:05 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:19.877 20:02:05 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:19.877 20:02:05 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:19.877 20:02:05 -- scripts/common.sh@368 -- # return 0 00:05:19.877 20:02:05 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.877 20:02:05 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:19.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.877 --rc genhtml_branch_coverage=1 00:05:19.877 --rc genhtml_function_coverage=1 00:05:19.877 --rc genhtml_legend=1 00:05:19.877 --rc geninfo_all_blocks=1 00:05:19.877 --rc geninfo_unexecuted_blocks=1 00:05:19.877 00:05:19.877 ' 00:05:19.877 20:02:05 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:19.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.877 --rc genhtml_branch_coverage=1 00:05:19.877 --rc genhtml_function_coverage=1 00:05:19.877 --rc genhtml_legend=1 00:05:19.877 --rc geninfo_all_blocks=1 00:05:19.877 --rc geninfo_unexecuted_blocks=1 00:05:19.877 00:05:19.877 ' 00:05:19.877 20:02:05 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:19.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.877 --rc genhtml_branch_coverage=1 00:05:19.877 --rc 
genhtml_function_coverage=1 00:05:19.877 --rc genhtml_legend=1 00:05:19.877 --rc geninfo_all_blocks=1 00:05:19.877 --rc geninfo_unexecuted_blocks=1 00:05:19.877 00:05:19.877 ' 00:05:19.878 20:02:05 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:19.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.878 --rc genhtml_branch_coverage=1 00:05:19.878 --rc genhtml_function_coverage=1 00:05:19.878 --rc genhtml_legend=1 00:05:19.878 --rc geninfo_all_blocks=1 00:05:19.878 --rc geninfo_unexecuted_blocks=1 00:05:19.878 00:05:19.878 ' 00:05:19.878 20:02:05 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:19.878 20:02:05 -- nvmf/common.sh@7 -- # uname -s 00:05:19.878 20:02:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:19.878 20:02:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:19.878 20:02:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:19.878 20:02:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:19.878 20:02:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:19.878 20:02:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:19.878 20:02:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:19.878 20:02:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:19.878 20:02:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:19.878 20:02:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:19.878 20:02:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2b170c76-5239-45ab-b67f-1abff7414b97 00:05:19.878 20:02:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=2b170c76-5239-45ab-b67f-1abff7414b97 00:05:19.878 20:02:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:19.878 20:02:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:19.878 20:02:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:19.878 20:02:05 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:05:19.878 20:02:05 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:19.878 20:02:05 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:19.878 20:02:05 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:19.878 20:02:05 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:19.878 20:02:05 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:19.878 20:02:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.878 20:02:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.878 20:02:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.878 20:02:05 -- paths/export.sh@5 -- # export PATH 00:05:19.878 20:02:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.878 20:02:05 -- nvmf/common.sh@51 -- # : 0 00:05:19.878 20:02:05 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:19.878 20:02:05 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:19.878 20:02:05 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:05:19.878 20:02:05 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:19.878 20:02:05 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:19.878 20:02:05 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:19.878 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:19.878 20:02:05 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:19.878 20:02:05 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:19.878 20:02:05 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:19.878 20:02:05 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:19.878 20:02:05 -- spdk/autotest.sh@32 -- # uname -s 00:05:19.878 20:02:05 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:19.878 20:02:05 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:19.878 20:02:05 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:19.878 20:02:05 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:19.878 20:02:05 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:19.878 20:02:05 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:19.878 20:02:05 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:19.878 20:02:05 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:19.878 20:02:05 -- spdk/autotest.sh@48 -- # udevadm_pid=54315 00:05:19.878 20:02:05 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:19.878 20:02:05 -- pm/common@17 -- # local monitor 00:05:19.878 20:02:05 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:19.878 20:02:05 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:19.878 20:02:05 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:19.878 20:02:05 -- pm/common@25 -- # sleep 1 00:05:19.878 20:02:05 -- pm/common@21 -- # date +%s 00:05:19.878 20:02:05 -- 
pm/common@21 -- # date +%s 00:05:19.878 20:02:05 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1729195325 00:05:19.878 20:02:05 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1729195325 00:05:19.878 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1729195325_collect-vmstat.pm.log 00:05:20.136 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1729195325_collect-cpu-load.pm.log 00:05:21.071 20:02:06 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:21.071 20:02:06 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:21.071 20:02:06 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:21.071 20:02:06 -- common/autotest_common.sh@10 -- # set +x 00:05:21.071 20:02:06 -- spdk/autotest.sh@59 -- # create_test_list 00:05:21.071 20:02:06 -- common/autotest_common.sh@748 -- # xtrace_disable 00:05:21.071 20:02:06 -- common/autotest_common.sh@10 -- # set +x 00:05:21.071 20:02:06 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:21.071 20:02:06 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:21.071 20:02:06 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:21.071 20:02:06 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:21.071 20:02:06 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:21.071 20:02:06 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:21.071 20:02:06 -- common/autotest_common.sh@1455 -- # uname 00:05:21.071 20:02:06 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:05:21.071 20:02:06 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:21.071 20:02:06 -- common/autotest_common.sh@1475 -- 
# uname 00:05:21.071 20:02:06 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:05:21.071 20:02:06 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:21.071 20:02:06 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:21.071 lcov: LCOV version 1.15 00:05:21.071 20:02:06 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:39.161 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:39.161 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:57.239 20:02:40 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:57.239 20:02:40 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:57.239 20:02:40 -- common/autotest_common.sh@10 -- # set +x 00:05:57.239 20:02:40 -- spdk/autotest.sh@78 -- # rm -f 00:05:57.239 20:02:40 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:57.239 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:57.239 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:57.239 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:57.239 20:02:40 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:57.239 20:02:40 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:57.239 20:02:40 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:57.239 20:02:40 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:57.239 
20:02:40 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:57.239 20:02:40 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:57.239 20:02:40 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:57.239 20:02:40 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:57.239 20:02:40 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:57.239 20:02:40 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:57.239 20:02:40 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:05:57.239 20:02:40 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:05:57.239 20:02:40 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:57.239 20:02:40 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:57.239 20:02:40 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:57.239 20:02:40 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:05:57.239 20:02:40 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:05:57.239 20:02:40 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:57.239 20:02:40 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:57.239 20:02:40 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:57.239 20:02:40 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:05:57.239 20:02:40 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:05:57.239 20:02:40 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:57.239 20:02:40 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:57.239 20:02:40 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:57.239 20:02:40 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:57.239 20:02:40 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:57.239 20:02:40 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme0n1 00:05:57.239 20:02:40 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:57.239 20:02:40 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:57.239 No valid GPT data, bailing 00:05:57.239 20:02:40 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:57.239 20:02:40 -- scripts/common.sh@394 -- # pt= 00:05:57.239 20:02:40 -- scripts/common.sh@395 -- # return 1 00:05:57.239 20:02:40 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:57.239 1+0 records in 00:05:57.239 1+0 records out 00:05:57.239 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0052263 s, 201 MB/s 00:05:57.239 20:02:40 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:57.239 20:02:40 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:57.239 20:02:40 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:57.239 20:02:40 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:57.239 20:02:40 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:57.239 No valid GPT data, bailing 00:05:57.239 20:02:40 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:57.239 20:02:40 -- scripts/common.sh@394 -- # pt= 00:05:57.239 20:02:40 -- scripts/common.sh@395 -- # return 1 00:05:57.239 20:02:40 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:57.239 1+0 records in 00:05:57.239 1+0 records out 00:05:57.240 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00433713 s, 242 MB/s 00:05:57.240 20:02:40 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:57.240 20:02:40 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:57.240 20:02:40 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:05:57.240 20:02:40 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:05:57.240 20:02:40 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 
00:05:57.240 No valid GPT data, bailing
00:05:57.240 20:02:41 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2
00:05:57.240 20:02:41 -- scripts/common.sh@394 -- # pt=
00:05:57.240 20:02:41 -- scripts/common.sh@395 -- # return 1
00:05:57.240 20:02:41 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1
00:05:57.240 1+0 records in
00:05:57.240 1+0 records out
00:05:57.240 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00499438 s, 210 MB/s
00:05:57.240 20:02:41 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:05:57.240 20:02:41 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:05:57.240 20:02:41 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3
00:05:57.240 20:02:41 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt
00:05:57.240 20:02:41 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3
00:05:57.240 No valid GPT data, bailing
00:05:57.240 20:02:41 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3
00:05:57.240 20:02:41 -- scripts/common.sh@394 -- # pt=
00:05:57.240 20:02:41 -- scripts/common.sh@395 -- # return 1
00:05:57.240 20:02:41 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1
00:05:57.240 1+0 records in
00:05:57.240 1+0 records out
00:05:57.240 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00501927 s, 209 MB/s
00:05:57.240 20:02:41 -- spdk/autotest.sh@105 -- # sync
00:05:57.240 20:02:41 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes
00:05:57.240 20:02:41 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:05:57.240 20:02:41 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:05:57.498 20:02:43 -- spdk/autotest.sh@111 -- # uname -s
00:05:57.498 20:02:43 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]]
00:05:57.498 20:02:43 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]]
00:05:57.498 20:02:43 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:05:58.434 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:58.434 Hugepages
00:05:58.434 node hugesize free / total
00:05:58.434 node0 1048576kB 0 / 0
00:05:58.434 node0 2048kB 0 / 0
00:05:58.434
00:05:58.434 Type BDF Vendor Device NUMA Driver Device Block devices
00:05:58.434 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:05:58.434 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:05:58.434 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:05:58.434 20:02:43 -- spdk/autotest.sh@117 -- # uname -s
00:05:58.434 20:02:43 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:05:58.434 20:02:43 -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:05:58.434 20:02:43 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:59.002 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:59.288 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:05:59.288 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:05:59.288 20:02:44 -- common/autotest_common.sh@1515 -- # sleep 1
00:06:00.705 20:02:45 -- common/autotest_common.sh@1516 -- # bdfs=()
00:06:00.705 20:02:45 -- common/autotest_common.sh@1516 -- # local bdfs
00:06:00.705 20:02:45 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs))
00:06:00.705 20:02:45 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs
00:06:00.705 20:02:45 -- common/autotest_common.sh@1496 -- # bdfs=()
00:06:00.705 20:02:45 -- common/autotest_common.sh@1496 -- # local bdfs
00:06:00.705 20:02:45 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:06:00.705 20:02:45 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr'
00:06:00.705 20:02:45 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:06:00.705 20:02:46 -- common/autotest_common.sh@1498 -- # (( 2 == 0 ))
00:06:00.705 20:02:46 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0
00:06:00.705 20:02:46 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:06:00.705 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:06:00.962 Waiting for block devices as requested
00:06:00.962 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:06:00.962 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:06:00.962 20:02:46 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}"
00:06:00.962 20:02:46 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0
00:06:00.962 20:02:46 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme
00:06:00.962 20:02:46 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1
00:06:00.962 20:02:46 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1
00:06:00.962 20:02:46 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]]
00:06:00.962 20:02:46 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1
00:06:00.962 20:02:46 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1
00:06:00.962 20:02:46 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1
00:06:00.962 20:02:46 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]]
00:06:00.962 20:02:46 -- common/autotest_common.sh@1529 -- # grep oacs
00:06:00.962 20:02:46 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1
00:06:00.962 20:02:46 -- common/autotest_common.sh@1529 -- # cut -d: -f2
00:06:01.219 20:02:46 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a'
00:06:01.219 20:02:46 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8
00:06:01.219 20:02:46 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]]
00:06:01.219 20:02:46 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1
00:06:01.219 20:02:46 -- common/autotest_common.sh@1538 -- # cut -d: -f2
00:06:01.219 20:02:46 -- common/autotest_common.sh@1538 -- # grep unvmcap
00:06:01.219 20:02:46 -- common/autotest_common.sh@1538 -- # unvmcap=' 0'
00:06:01.219 20:02:46 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]]
00:06:01.219 20:02:46 -- common/autotest_common.sh@1541 -- # continue
00:06:01.219 20:02:46 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}"
00:06:01.219 20:02:46 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0
00:06:01.219 20:02:46 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme
00:06:01.219 20:02:46 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1
00:06:01.219 20:02:46 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0
00:06:01.219 20:02:46 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]]
00:06:01.219 20:02:46 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0
00:06:01.219 20:02:46 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0
00:06:01.219 20:02:46 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0
00:06:01.219 20:02:46 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]]
00:06:01.219 20:02:46 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0
00:06:01.219 20:02:46 -- common/autotest_common.sh@1529 -- # grep oacs
00:06:01.219 20:02:46 -- common/autotest_common.sh@1529 -- # cut -d: -f2
00:06:01.219 20:02:46 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a'
00:06:01.219 20:02:46 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8
00:06:01.219 20:02:46 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]]
00:06:01.219 20:02:46 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0
00:06:01.219 20:02:46 -- common/autotest_common.sh@1538 -- # grep unvmcap
00:06:01.219 20:02:46 -- common/autotest_common.sh@1538 -- # cut -d: -f2
00:06:01.219 20:02:46 -- common/autotest_common.sh@1538 -- # unvmcap=' 0'
00:06:01.219 20:02:46 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]]
00:06:01.219 20:02:46 -- common/autotest_common.sh@1541 -- # continue
00:06:01.219 20:02:46 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup
00:06:01.219 20:02:46 -- common/autotest_common.sh@730 -- # xtrace_disable
00:06:01.219 20:02:46 -- common/autotest_common.sh@10 -- # set +x
00:06:01.219 20:02:46 -- spdk/autotest.sh@125 -- # timing_enter afterboot
00:06:01.219 20:02:46 -- common/autotest_common.sh@724 -- # xtrace_disable
00:06:01.219 20:02:46 -- common/autotest_common.sh@10 -- # set +x
00:06:01.219 20:02:46 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:06:01.784 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:06:02.042 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:06:02.042 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:06:02.042 20:02:47 -- spdk/autotest.sh@127 -- # timing_exit afterboot
00:06:02.042 20:02:47 -- common/autotest_common.sh@730 -- # xtrace_disable
00:06:02.042 20:02:47 -- common/autotest_common.sh@10 -- # set +x
00:06:02.042 20:02:47 -- spdk/autotest.sh@131 -- # opal_revert_cleanup
00:06:02.042 20:02:47 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs
00:06:02.042 20:02:47 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54
00:06:02.042 20:02:47 -- common/autotest_common.sh@1561 -- # bdfs=()
00:06:02.042 20:02:47 -- common/autotest_common.sh@1561 -- # _bdfs=()
00:06:02.042 20:02:47 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs
00:06:02.042 20:02:47 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs))
00:06:02.042 20:02:47 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs
00:06:02.042 20:02:47 -- common/autotest_common.sh@1496 -- # bdfs=()
00:06:02.042 20:02:47 -- common/autotest_common.sh@1496 -- # local bdfs
00:06:02.042 20:02:47 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:06:02.042 20:02:47 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr'
00:06:02.042 20:02:47 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:06:02.042 20:02:47 -- common/autotest_common.sh@1498 -- # (( 2 == 0 ))
00:06:02.042 20:02:47 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0
00:06:02.042 20:02:47 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}"
00:06:02.042 20:02:47 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device
00:06:02.300 20:02:47 -- common/autotest_common.sh@1564 -- # device=0x0010
00:06:02.300 20:02:47 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]]
00:06:02.300 20:02:47 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}"
00:06:02.300 20:02:47 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device
00:06:02.300 20:02:47 -- common/autotest_common.sh@1564 -- # device=0x0010
00:06:02.300 20:02:47 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]]
00:06:02.300 20:02:47 -- common/autotest_common.sh@1570 -- # (( 0 > 0 ))
00:06:02.300 20:02:47 -- common/autotest_common.sh@1570 -- # return 0
00:06:02.300 20:02:47 -- common/autotest_common.sh@1577 -- # [[ -z '' ]]
00:06:02.300 20:02:47 -- common/autotest_common.sh@1578 -- # return 0
00:06:02.300 20:02:47 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']'
00:06:02.300 20:02:47 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']'
00:06:02.300 20:02:47 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:06:02.300 20:02:47 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:06:02.300 20:02:47 -- spdk/autotest.sh@149 -- # timing_enter lib
00:06:02.300 20:02:47 -- common/autotest_common.sh@724 -- # xtrace_disable
00:06:02.300 20:02:47 -- common/autotest_common.sh@10 -- # set +x
00:06:02.300 20:02:47 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]]
00:06:02.300 20:02:47 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh
00:06:02.300 20:02:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:02.300 20:02:47 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:02.300 20:02:47 -- common/autotest_common.sh@10 -- # set +x
00:06:02.300 ************************************
00:06:02.300 START TEST env
00:06:02.300 ************************************
00:06:02.300 20:02:47 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh
00:06:02.300 * Looking for test storage...
00:06:02.300 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env
00:06:02.300 20:02:47 env -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:06:02.300 20:02:47 env -- common/autotest_common.sh@1691 -- # lcov --version
00:06:02.300 20:02:47 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:06:02.300 20:02:47 env -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:06:02.300 20:02:47 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:02.300 20:02:47 env -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:02.300 20:02:47 env -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:02.300 20:02:47 env -- scripts/common.sh@336 -- # IFS=.-:
00:06:02.300 20:02:47 env -- scripts/common.sh@336 -- # read -ra ver1
00:06:02.300 20:02:47 env -- scripts/common.sh@337 -- # IFS=.-:
00:06:02.300 20:02:47 env -- scripts/common.sh@337 -- # read -ra ver2
00:06:02.300 20:02:47 env -- scripts/common.sh@338 -- # local 'op=<'
00:06:02.300 20:02:47 env -- scripts/common.sh@340 -- # ver1_l=2
00:06:02.301 20:02:47 env -- scripts/common.sh@341 -- # ver2_l=1
00:06:02.301 20:02:47 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:02.301 20:02:47 env -- scripts/common.sh@344 -- # case "$op" in
00:06:02.301 20:02:47 env -- scripts/common.sh@345 -- # : 1
00:06:02.301 20:02:47 env -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:02.301 20:02:47 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:02.301 20:02:47 env -- scripts/common.sh@365 -- # decimal 1
00:06:02.301 20:02:47 env -- scripts/common.sh@353 -- # local d=1
00:06:02.301 20:02:47 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:02.301 20:02:47 env -- scripts/common.sh@355 -- # echo 1
00:06:02.301 20:02:47 env -- scripts/common.sh@365 -- # ver1[v]=1
00:06:02.301 20:02:47 env -- scripts/common.sh@366 -- # decimal 2
00:06:02.301 20:02:47 env -- scripts/common.sh@353 -- # local d=2
00:06:02.301 20:02:47 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:02.301 20:02:47 env -- scripts/common.sh@355 -- # echo 2
00:06:02.301 20:02:47 env -- scripts/common.sh@366 -- # ver2[v]=2
00:06:02.301 20:02:47 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:02.301 20:02:47 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:02.301 20:02:47 env -- scripts/common.sh@368 -- # return 0
00:06:02.301 20:02:47 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:02.301 20:02:47 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:06:02.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:02.301 --rc genhtml_branch_coverage=1
00:06:02.301 --rc genhtml_function_coverage=1
00:06:02.301 --rc genhtml_legend=1
00:06:02.301 --rc geninfo_all_blocks=1
00:06:02.301 --rc geninfo_unexecuted_blocks=1
00:06:02.301
00:06:02.301 '
00:06:02.301 20:02:47 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:06:02.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:02.301 --rc genhtml_branch_coverage=1
00:06:02.301 --rc genhtml_function_coverage=1
00:06:02.301 --rc genhtml_legend=1
00:06:02.301 --rc geninfo_all_blocks=1
00:06:02.301 --rc geninfo_unexecuted_blocks=1
00:06:02.301
00:06:02.301 '
00:06:02.301 20:02:47 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:06:02.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:02.301 --rc genhtml_branch_coverage=1
00:06:02.301 --rc genhtml_function_coverage=1
00:06:02.301 --rc genhtml_legend=1
00:06:02.301 --rc geninfo_all_blocks=1
00:06:02.301 --rc geninfo_unexecuted_blocks=1
00:06:02.301
00:06:02.301 '
00:06:02.301 20:02:47 env -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:06:02.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:02.301 --rc genhtml_branch_coverage=1
00:06:02.301 --rc genhtml_function_coverage=1
00:06:02.301 --rc genhtml_legend=1
00:06:02.301 --rc geninfo_all_blocks=1
00:06:02.301 --rc geninfo_unexecuted_blocks=1
00:06:02.301
00:06:02.301 '
00:06:02.301 20:02:47 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut
00:06:02.301 20:02:47 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:02.301 20:02:47 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:02.301 20:02:47 env -- common/autotest_common.sh@10 -- # set +x
00:06:02.301 ************************************
00:06:02.301 START TEST env_memory
00:06:02.301 ************************************
00:06:02.301 20:02:47 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut
00:06:02.559
00:06:02.559
00:06:02.559 CUnit - A unit testing framework for C - Version 2.1-3
00:06:02.559 http://cunit.sourceforge.net/
00:06:02.559
00:06:02.559
00:06:02.559 Suite: memory
00:06:02.559 Test: alloc and free memory map ...[2024-10-17 20:02:48.019443] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:06:02.559 passed
00:06:02.559 Test: mem map translation ...[2024-10-17 20:02:48.081330] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:06:02.559 [2024-10-17 20:02:48.081685] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:06:02.559 [2024-10-17 20:02:48.082184] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:06:02.559 [2024-10-17 20:02:48.082435] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:06:02.559 passed
00:06:02.559 Test: mem map registration ...[2024-10-17 20:02:48.181609] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234
00:06:02.817 [2024-10-17 20:02:48.182022] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152
00:06:02.817 passed
00:06:02.817 Test: mem map adjacent registrations ...passed
00:06:02.817
00:06:02.817 Run Summary: Type Total Ran Passed Failed Inactive
00:06:02.817 suites 1 1 n/a 0 0
00:06:02.817 tests 4 4 4 0 0
00:06:02.817 asserts 152 152 152 0 n/a
00:06:02.817
00:06:02.817 Elapsed time = 0.320 seconds
00:06:02.817
00:06:02.817 real 0m0.359s
00:06:02.817 user 0m0.322s
00:06:02.817 ************************************
00:06:02.817 END TEST env_memory
00:06:02.817 ************************************
00:06:02.817 sys 0m0.026s
00:06:02.817 20:02:48 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:02.817 20:02:48 env.env_memory -- common/autotest_common.sh@10 -- # set +x
00:06:02.817 20:02:48 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys
00:06:02.817 20:02:48 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:02.817 20:02:48 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:02.817 20:02:48 env -- common/autotest_common.sh@10 -- # set +x
00:06:02.817 ************************************
00:06:02.817 START TEST env_vtophys
00:06:02.817 ************************************
00:06:02.817 20:02:48 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys
00:06:02.817 EAL: lib.eal log level changed from notice to debug
00:06:02.817 EAL: Detected lcore 0 as core 0 on socket 0
00:06:02.817 EAL: Detected lcore 1 as core 0 on socket 0
00:06:02.817 EAL: Detected lcore 2 as core 0 on socket 0
00:06:02.817 EAL: Detected lcore 3 as core 0 on socket 0
00:06:02.817 EAL: Detected lcore 4 as core 0 on socket 0
00:06:02.817 EAL: Detected lcore 5 as core 0 on socket 0
00:06:02.817 EAL: Detected lcore 6 as core 0 on socket 0
00:06:02.817 EAL: Detected lcore 7 as core 0 on socket 0
00:06:02.817 EAL: Detected lcore 8 as core 0 on socket 0
00:06:02.817 EAL: Detected lcore 9 as core 0 on socket 0
00:06:02.817 EAL: Maximum logical cores by configuration: 128
00:06:02.817 EAL: Detected CPU lcores: 10
00:06:02.817 EAL: Detected NUMA nodes: 1
00:06:02.817 EAL: Checking presence of .so 'librte_eal.so.24.1'
00:06:02.817 EAL: Detected shared linkage of DPDK
00:06:03.075 EAL: No shared files mode enabled, IPC will be disabled
00:06:03.075 EAL: Selected IOVA mode 'PA'
00:06:03.075 EAL: Probing VFIO support...
00:06:03.075 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory)
00:06:03.075 EAL: VFIO modules not loaded, skipping VFIO support...
00:06:03.075 EAL: Ask a virtual area of 0x2e000 bytes
00:06:03.075 EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:06:03.075 EAL: Setting up physically contiguous memory...
00:06:03.075 EAL: Setting maximum number of open files to 524288
00:06:03.075 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:06:03.075 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:06:03.075 EAL: Ask a virtual area of 0x61000 bytes
00:06:03.075 EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:06:03.075 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:06:03.075 EAL: Ask a virtual area of 0x400000000 bytes
00:06:03.075 EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:06:03.075 EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:06:03.075 EAL: Ask a virtual area of 0x61000 bytes
00:06:03.075 EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:06:03.075 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:06:03.075 EAL: Ask a virtual area of 0x400000000 bytes
00:06:03.075 EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:06:03.075 EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:06:03.075 EAL: Ask a virtual area of 0x61000 bytes
00:06:03.075 EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:06:03.075 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:06:03.075 EAL: Ask a virtual area of 0x400000000 bytes
00:06:03.075 EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:06:03.075 EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:06:03.075 EAL: Ask a virtual area of 0x61000 bytes
00:06:03.075 EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:06:03.075 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:06:03.075 EAL: Ask a virtual area of 0x400000000 bytes
00:06:03.075 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:06:03.075 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:06:03.075 EAL: Hugepages will be freed exactly as allocated.
00:06:03.075 EAL: No shared files mode enabled, IPC is disabled
00:06:03.075 EAL: No shared files mode enabled, IPC is disabled
00:06:03.075 EAL: TSC frequency is ~2200000 KHz
00:06:03.075 EAL: Main lcore 0 is ready (tid=7fe65fdbaa40;cpuset=[0])
00:06:03.075 EAL: Trying to obtain current memory policy.
00:06:03.075 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:03.075 EAL: Restoring previous memory policy: 0
00:06:03.075 EAL: request: mp_malloc_sync
00:06:03.075 EAL: No shared files mode enabled, IPC is disabled
00:06:03.075 EAL: Heap on socket 0 was expanded by 2MB
00:06:03.075 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory)
00:06:03.075 EAL: No PCI address specified using 'addr=' in: bus=pci
00:06:03.075 EAL: Mem event callback 'spdk:(nil)' registered
00:06:03.075 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory)
00:06:03.075
00:06:03.075
00:06:03.075 CUnit - A unit testing framework for C - Version 2.1-3
00:06:03.075 http://cunit.sourceforge.net/
00:06:03.075
00:06:03.075
00:06:03.075 Suite: components_suite
00:06:03.641 Test: vtophys_malloc_test ...passed
00:06:03.641 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:06:03.641 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:03.641 EAL: Restoring previous memory policy: 4
00:06:03.641 EAL: Calling mem event callback 'spdk:(nil)'
00:06:03.641 EAL: request: mp_malloc_sync
00:06:03.641 EAL: No shared files mode enabled, IPC is disabled
00:06:03.641 EAL: Heap on socket 0 was expanded by 4MB
00:06:03.641 EAL: Calling mem event callback 'spdk:(nil)'
00:06:03.641 EAL: request: mp_malloc_sync
00:06:03.641 EAL: No shared files mode enabled, IPC is disabled
00:06:03.641 EAL: Heap on socket 0 was shrunk by 4MB
00:06:03.641 EAL: Trying to obtain current memory policy.
00:06:03.641 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:03.641 EAL: Restoring previous memory policy: 4
00:06:03.641 EAL: Calling mem event callback 'spdk:(nil)'
00:06:03.641 EAL: request: mp_malloc_sync
00:06:03.641 EAL: No shared files mode enabled, IPC is disabled
00:06:03.641 EAL: Heap on socket 0 was expanded by 6MB
00:06:03.641 EAL: Calling mem event callback 'spdk:(nil)'
00:06:03.641 EAL: request: mp_malloc_sync
00:06:03.641 EAL: No shared files mode enabled, IPC is disabled
00:06:03.641 EAL: Heap on socket 0 was shrunk by 6MB
00:06:03.641 EAL: Trying to obtain current memory policy.
00:06:03.641 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:03.641 EAL: Restoring previous memory policy: 4
00:06:03.641 EAL: Calling mem event callback 'spdk:(nil)'
00:06:03.641 EAL: request: mp_malloc_sync
00:06:03.642 EAL: No shared files mode enabled, IPC is disabled
00:06:03.642 EAL: Heap on socket 0 was expanded by 10MB
00:06:03.642 EAL: Calling mem event callback 'spdk:(nil)'
00:06:03.642 EAL: request: mp_malloc_sync
00:06:03.642 EAL: No shared files mode enabled, IPC is disabled
00:06:03.642 EAL: Heap on socket 0 was shrunk by 10MB
00:06:03.642 EAL: Trying to obtain current memory policy.
00:06:03.642 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:03.642 EAL: Restoring previous memory policy: 4
00:06:03.642 EAL: Calling mem event callback 'spdk:(nil)'
00:06:03.642 EAL: request: mp_malloc_sync
00:06:03.642 EAL: No shared files mode enabled, IPC is disabled
00:06:03.642 EAL: Heap on socket 0 was expanded by 18MB
00:06:03.642 EAL: Calling mem event callback 'spdk:(nil)'
00:06:03.642 EAL: request: mp_malloc_sync
00:06:03.642 EAL: No shared files mode enabled, IPC is disabled
00:06:03.642 EAL: Heap on socket 0 was shrunk by 18MB
00:06:03.642 EAL: Trying to obtain current memory policy.
00:06:03.642 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:03.642 EAL: Restoring previous memory policy: 4
00:06:03.642 EAL: Calling mem event callback 'spdk:(nil)'
00:06:03.642 EAL: request: mp_malloc_sync
00:06:03.642 EAL: No shared files mode enabled, IPC is disabled
00:06:03.642 EAL: Heap on socket 0 was expanded by 34MB
00:06:03.900 EAL: Calling mem event callback 'spdk:(nil)'
00:06:03.900 EAL: request: mp_malloc_sync
00:06:03.900 EAL: No shared files mode enabled, IPC is disabled
00:06:03.900 EAL: Heap on socket 0 was shrunk by 34MB
00:06:03.900 EAL: Trying to obtain current memory policy.
00:06:03.900 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:03.900 EAL: Restoring previous memory policy: 4
00:06:03.900 EAL: Calling mem event callback 'spdk:(nil)'
00:06:03.900 EAL: request: mp_malloc_sync
00:06:03.900 EAL: No shared files mode enabled, IPC is disabled
00:06:03.900 EAL: Heap on socket 0 was expanded by 66MB
00:06:03.900 EAL: Calling mem event callback 'spdk:(nil)'
00:06:03.900 EAL: request: mp_malloc_sync
00:06:03.900 EAL: No shared files mode enabled, IPC is disabled
00:06:03.900 EAL: Heap on socket 0 was shrunk by 66MB
00:06:04.158 EAL: Trying to obtain current memory policy.
00:06:04.158 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:04.158 EAL: Restoring previous memory policy: 4
00:06:04.158 EAL: Calling mem event callback 'spdk:(nil)'
00:06:04.158 EAL: request: mp_malloc_sync
00:06:04.158 EAL: No shared files mode enabled, IPC is disabled
00:06:04.158 EAL: Heap on socket 0 was expanded by 130MB
00:06:04.417 EAL: Calling mem event callback 'spdk:(nil)'
00:06:04.417 EAL: request: mp_malloc_sync
00:06:04.417 EAL: No shared files mode enabled, IPC is disabled
00:06:04.417 EAL: Heap on socket 0 was shrunk by 130MB
00:06:04.676 EAL: Trying to obtain current memory policy.
00:06:04.676 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:04.676 EAL: Restoring previous memory policy: 4
00:06:04.676 EAL: Calling mem event callback 'spdk:(nil)'
00:06:04.676 EAL: request: mp_malloc_sync
00:06:04.676 EAL: No shared files mode enabled, IPC is disabled
00:06:04.676 EAL: Heap on socket 0 was expanded by 258MB
00:06:05.243 EAL: Calling mem event callback 'spdk:(nil)'
00:06:05.243 EAL: request: mp_malloc_sync
00:06:05.243 EAL: No shared files mode enabled, IPC is disabled
00:06:05.243 EAL: Heap on socket 0 was shrunk by 258MB
00:06:05.502 EAL: Trying to obtain current memory policy.
00:06:05.502 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:05.760 EAL: Restoring previous memory policy: 4
00:06:05.760 EAL: Calling mem event callback 'spdk:(nil)'
00:06:05.760 EAL: request: mp_malloc_sync
00:06:05.760 EAL: No shared files mode enabled, IPC is disabled
00:06:05.760 EAL: Heap on socket 0 was expanded by 514MB
00:06:06.696 EAL: Calling mem event callback 'spdk:(nil)'
00:06:06.954 EAL: request: mp_malloc_sync
00:06:06.954 EAL: No shared files mode enabled, IPC is disabled
00:06:06.954 EAL: Heap on socket 0 was shrunk by 514MB
00:06:07.551 EAL: Trying to obtain current memory policy.
00:06:07.551 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:08.119 EAL: Restoring previous memory policy: 4
00:06:08.119 EAL: Calling mem event callback 'spdk:(nil)'
00:06:08.119 EAL: request: mp_malloc_sync
00:06:08.119 EAL: No shared files mode enabled, IPC is disabled
00:06:08.119 EAL: Heap on socket 0 was expanded by 1026MB
00:06:10.023 EAL: Calling mem event callback 'spdk:(nil)'
00:06:10.280 EAL: request: mp_malloc_sync
00:06:10.280 EAL: No shared files mode enabled, IPC is disabled
00:06:10.280 EAL: Heap on socket 0 was shrunk by 1026MB
00:06:11.656 passed
00:06:11.656
00:06:11.656 Run Summary: Type Total Ran Passed Failed Inactive
00:06:11.656 suites 1 1 n/a 0 0
00:06:11.656 tests 2 2 2 0 0
00:06:11.656 asserts 5726 5726 5726 0 n/a
00:06:11.656
00:06:11.656 Elapsed time = 8.511 seconds
00:06:11.656 EAL: Calling mem event callback 'spdk:(nil)'
00:06:11.656 EAL: request: mp_malloc_sync
00:06:11.656 EAL: No shared files mode enabled, IPC is disabled
00:06:11.656 EAL: Heap on socket 0 was shrunk by 2MB
00:06:11.656 EAL: No shared files mode enabled, IPC is disabled
00:06:11.656 EAL: No shared files mode enabled, IPC is disabled
00:06:11.656 EAL: No shared files mode enabled, IPC is disabled
00:06:11.656
00:06:11.656 real 0m8.879s
00:06:11.656 user 0m7.393s
00:06:11.656 sys 0m1.306s
00:06:11.657 ************************************
00:06:11.657 END TEST env_vtophys
00:06:11.657 ************************************
00:06:11.657 20:02:57 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:11.657 20:02:57 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:06:11.657 20:02:57 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:06:11.657 20:02:57 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:11.657 20:02:57 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:11.657 20:02:57 env -- common/autotest_common.sh@10 -- # set +x
00:06:11.657 ************************************
00:06:11.657 START TEST env_pci
00:06:11.657 ************************************
00:06:11.657 20:02:57 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:06:11.914
00:06:11.914
00:06:11.914 CUnit - A unit testing framework for C - Version 2.1-3
00:06:11.914 http://cunit.sourceforge.net/
00:06:11.914
00:06:11.914
00:06:11.914 Suite: pci
00:06:11.914 Test: pci_hook ...[2024-10-17 20:02:57.340803] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56654 has claimed it
00:06:11.914 passed
00:06:11.914
00:06:11.914 Run Summary: Type Total Ran Passed Failed Inactive
00:06:11.914 suites 1 1 n/a 0 0
00:06:11.914 tests 1 1 1 0 0
00:06:11.914 asserts 25 25 25 0 n/a
00:06:11.914
00:06:11.914 Elapsed time = 0.011 seconds
00:06:11.914 EAL: Cannot find device (10000:00:01.0)
00:06:11.914 EAL: Failed to attach device on primary process
00:06:11.914
00:06:11.914 real 0m0.101s
00:06:11.914 user 0m0.036s
00:06:11.914 sys 0m0.063s
00:06:11.914 20:02:57 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:11.914 ************************************
00:06:11.914 20:02:57 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:06:11.914 END TEST env_pci
00:06:11.914 ************************************
00:06:11.914 20:02:57 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:06:11.914 20:02:57 env -- env/env.sh@15 -- # uname
00:06:11.914 20:02:57 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:06:11.914 20:02:57 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:06:11.914 20:02:57 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:06:11.914 20:02:57 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:06:11.914 20:02:57 env
-- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:11.914 20:02:57 env -- common/autotest_common.sh@10 -- # set +x
00:06:11.914 ************************************
00:06:11.914 START TEST env_dpdk_post_init
00:06:11.914 ************************************
00:06:11.914 20:02:57 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:06:11.914 EAL: Detected CPU lcores: 10
00:06:11.914 EAL: Detected NUMA nodes: 1
00:06:11.914 EAL: Detected shared linkage of DPDK
00:06:11.914 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:06:11.914 EAL: Selected IOVA mode 'PA'
00:06:12.172 TELEMETRY: No legacy callbacks, legacy socket not created
00:06:12.172 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1)
00:06:12.172 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1)
00:06:12.172 Starting DPDK initialization...
00:06:12.172 Starting SPDK post initialization...
00:06:12.172 SPDK NVMe probe
00:06:12.172 Attaching to 0000:00:10.0
00:06:12.172 Attaching to 0000:00:11.0
00:06:12.172 Attached to 0000:00:10.0
00:06:12.172 Attached to 0000:00:11.0
00:06:12.172 Cleaning up...
00:06:12.172
00:06:12.172 real 0m0.311s
00:06:12.172 user 0m0.094s
00:06:12.172 sys 0m0.117s
00:06:12.172 20:02:57 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:12.172 20:02:57 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:06:12.172 ************************************
00:06:12.172 END TEST env_dpdk_post_init
00:06:12.172 ************************************
00:06:12.172 20:02:57 env -- env/env.sh@26 -- # uname
00:06:12.172 20:02:57 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:06:12.172 20:02:57 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:06:12.172 20:02:57 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:12.172 20:02:57 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:12.172 20:02:57 env -- common/autotest_common.sh@10 -- # set +x
00:06:12.172 ************************************
00:06:12.172 START TEST env_mem_callbacks
00:06:12.172 ************************************
00:06:12.172 20:02:57 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:06:12.430 EAL: Detected CPU lcores: 10
00:06:12.430 EAL: Detected NUMA nodes: 1
00:06:12.430 EAL: Detected shared linkage of DPDK
00:06:12.430 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:06:12.430 EAL: Selected IOVA mode 'PA'
00:06:12.430
00:06:12.430
00:06:12.430 CUnit - A unit testing framework for C - Version 2.1-3
00:06:12.430 http://cunit.sourceforge.net/
00:06:12.430
00:06:12.430
00:06:12.430 Suite: memory
00:06:12.430 Test: test ...
00:06:12.430 TELEMETRY: No legacy callbacks, legacy socket not created
00:06:12.430 register 0x200000200000 2097152
00:06:12.430 malloc 3145728
00:06:12.430 register 0x200000400000 4194304
00:06:12.430 buf 0x2000004fffc0 len 3145728 PASSED
00:06:12.430 malloc 64
00:06:12.430 buf 0x2000004ffec0 len 64 PASSED
00:06:12.430 malloc 4194304
00:06:12.430 register 0x200000800000 6291456
00:06:12.430 buf 0x2000009fffc0 len 4194304 PASSED
00:06:12.430 free 0x2000004fffc0 3145728
00:06:12.430 free 0x2000004ffec0 64
00:06:12.688 unregister 0x200000400000 4194304 PASSED
00:06:12.688 free 0x2000009fffc0 4194304
00:06:12.688 unregister 0x200000800000 6291456 PASSED
00:06:12.688 malloc 8388608
00:06:12.688 register 0x200000400000 10485760
00:06:12.688 buf 0x2000005fffc0 len 8388608 PASSED
00:06:12.688 free 0x2000005fffc0 8388608
00:06:12.688 unregister 0x200000400000 10485760 PASSED
00:06:12.688 passed
00:06:12.688
00:06:12.688 Run Summary: Type Total Ran Passed Failed Inactive
00:06:12.688 suites 1 1 n/a 0 0
00:06:12.688 tests 1 1 1 0 0
00:06:12.688 asserts 15 15 15 0 n/a
00:06:12.688
00:06:12.688 Elapsed time = 0.071 seconds
00:06:12.688
00:06:12.688 real 0m0.333s
00:06:12.688 user 0m0.121s
00:06:12.688 sys 0m0.107s
00:06:12.688 20:02:58 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:12.688 20:02:58 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:06:12.688 ************************************
00:06:12.688 END TEST env_mem_callbacks
00:06:12.688 ************************************
00:06:12.688
00:06:12.688 real 0m10.475s
00:06:12.688 user 0m8.173s
00:06:12.688 sys 0m1.883s
00:06:12.688 20:02:58 env -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:12.688 20:02:58 env -- common/autotest_common.sh@10 -- # set +x
00:06:12.688 ************************************
00:06:12.688 END TEST env
00:06:12.688 ************************************
00:06:12.688 20:02:58 -- spdk/autotest.sh@156 -- # run_test rpc
/home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:06:12.688 20:02:58 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:12.688 20:02:58 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:12.688 20:02:58 -- common/autotest_common.sh@10 -- # set +x
00:06:12.688 ************************************
00:06:12.688 START TEST rpc
00:06:12.688 ************************************
00:06:12.688 20:02:58 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:06:12.688 * Looking for test storage...
00:06:12.688 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc
00:06:12.688 20:02:58 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:06:12.947 20:02:58 rpc -- common/autotest_common.sh@1691 -- # lcov --version
00:06:12.947 20:02:58 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:06:12.947 20:02:58 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:06:12.947 20:02:58 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:12.947 20:02:58 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:12.947 20:02:58 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:12.947 20:02:58 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:06:12.947 20:02:58 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:06:12.947 20:02:58 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:06:12.947 20:02:58 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:06:12.947 20:02:58 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:06:12.947 20:02:58 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:06:12.947 20:02:58 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:06:12.947 20:02:58 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:12.947 20:02:58 rpc -- scripts/common.sh@344 -- # case "$op" in
00:06:12.947 20:02:58 rpc -- scripts/common.sh@345 -- # : 1
00:06:12.947 20:02:58 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:12.947 20:02:58 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:12.947 20:02:58 rpc -- scripts/common.sh@365 -- # decimal 1
00:06:12.947 20:02:58 rpc -- scripts/common.sh@353 -- # local d=1
00:06:12.947 20:02:58 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:12.947 20:02:58 rpc -- scripts/common.sh@355 -- # echo 1
00:06:12.947 20:02:58 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:06:12.947 20:02:58 rpc -- scripts/common.sh@366 -- # decimal 2
00:06:12.947 20:02:58 rpc -- scripts/common.sh@353 -- # local d=2
00:06:12.947 20:02:58 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:12.947 20:02:58 rpc -- scripts/common.sh@355 -- # echo 2
00:06:12.947 20:02:58 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:06:12.947 20:02:58 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:12.947 20:02:58 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:12.947 20:02:58 rpc -- scripts/common.sh@368 -- # return 0
00:06:12.947 20:02:58 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:12.947 20:02:58 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:06:12.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:12.947 --rc genhtml_branch_coverage=1
00:06:12.947 --rc genhtml_function_coverage=1
00:06:12.947 --rc genhtml_legend=1
00:06:12.947 --rc geninfo_all_blocks=1
00:06:12.947 --rc geninfo_unexecuted_blocks=1
00:06:12.947
00:06:12.947 '
00:06:12.947 20:02:58 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:06:12.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:12.947 --rc genhtml_branch_coverage=1
00:06:12.947 --rc genhtml_function_coverage=1
00:06:12.947 --rc genhtml_legend=1
00:06:12.947 --rc geninfo_all_blocks=1
00:06:12.947 --rc geninfo_unexecuted_blocks=1
00:06:12.947
00:06:12.947 '
00:06:12.947 20:02:58 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:06:12.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:12.947 --rc genhtml_branch_coverage=1
00:06:12.947 --rc genhtml_function_coverage=1
00:06:12.947 --rc genhtml_legend=1
00:06:12.947 --rc geninfo_all_blocks=1
00:06:12.947 --rc geninfo_unexecuted_blocks=1
00:06:12.947
00:06:12.947 '
00:06:12.947 20:02:58 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:06:12.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:12.947 --rc genhtml_branch_coverage=1
00:06:12.947 --rc genhtml_function_coverage=1
00:06:12.947 --rc genhtml_legend=1
00:06:12.947 --rc geninfo_all_blocks=1
00:06:12.947 --rc geninfo_unexecuted_blocks=1
00:06:12.947
00:06:12.947 '
00:06:12.947 20:02:58 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56781
00:06:12.947 20:02:58 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:06:12.947 20:02:58 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev
00:06:12.947 20:02:58 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56781
00:06:12.947 20:02:58 rpc -- common/autotest_common.sh@831 -- # '[' -z 56781 ']'
00:06:12.947 20:02:58 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:12.947 20:02:58 rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:12.947 20:02:58 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:12.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:12.947 20:02:58 rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:12.947 20:02:58 rpc -- common/autotest_common.sh@10 -- # set +x
00:06:13.206 [2024-10-17 20:02:58.632722] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization...
00:06:13.206 [2024-10-17 20:02:58.633358] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56781 ]
00:06:13.206 [2024-10-17 20:02:58.823949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:13.464 [2024-10-17 20:02:58.998258] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:06:13.464 [2024-10-17 20:02:58.998358] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56781' to capture a snapshot of events at runtime.
00:06:13.464 [2024-10-17 20:02:58.998380] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:06:13.464 [2024-10-17 20:02:58.998400] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:06:13.464 [2024-10-17 20:02:58.998423] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56781 for offline analysis/debug.
00:06:13.464 [2024-10-17 20:02:59.000119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:14.398 20:03:00 rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:14.398 20:03:00 rpc -- common/autotest_common.sh@864 -- # return 0
00:06:14.398 20:03:00 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:06:14.398 20:03:00 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:06:14.398 20:03:00 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:06:14.398 20:03:00 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:06:14.398 20:03:00 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:14.398 20:03:00 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:14.398 20:03:00 rpc -- common/autotest_common.sh@10 -- # set +x
00:06:14.656 ************************************
00:06:14.656 START TEST rpc_integrity
00:06:14.656 ************************************
00:06:14.657 20:03:00 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity
00:06:14.657 20:03:00 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:06:14.657 20:03:00 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:14.657 20:03:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:14.657 20:03:00 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:14.657 20:03:00 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:06:14.657 20:03:00 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:06:14.657 20:03:00 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:06:14.657 20:03:00
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:06:14.657 20:03:00 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:14.657 20:03:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:14.657 20:03:00 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:14.657 20:03:00 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:06:14.657 20:03:00 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:06:14.657 20:03:00 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:14.657 20:03:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:14.657 20:03:00 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:14.657 20:03:00 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:06:14.657 {
00:06:14.657 "name": "Malloc0",
00:06:14.657 "aliases": [
00:06:14.657 "80f91ed8-121d-4347-99dc-1d94e2d0d659"
00:06:14.657 ],
00:06:14.657 "product_name": "Malloc disk",
00:06:14.657 "block_size": 512,
00:06:14.657 "num_blocks": 16384,
00:06:14.657 "uuid": "80f91ed8-121d-4347-99dc-1d94e2d0d659",
00:06:14.657 "assigned_rate_limits": {
00:06:14.657 "rw_ios_per_sec": 0,
00:06:14.657 "rw_mbytes_per_sec": 0,
00:06:14.657 "r_mbytes_per_sec": 0,
00:06:14.657 "w_mbytes_per_sec": 0
00:06:14.657 },
00:06:14.657 "claimed": false,
00:06:14.657 "zoned": false,
00:06:14.657 "supported_io_types": {
00:06:14.657 "read": true,
00:06:14.657 "write": true,
00:06:14.657 "unmap": true,
00:06:14.657 "flush": true,
00:06:14.657 "reset": true,
00:06:14.657 "nvme_admin": false,
00:06:14.657 "nvme_io": false,
00:06:14.657 "nvme_io_md": false,
00:06:14.657 "write_zeroes": true,
00:06:14.657 "zcopy": true,
00:06:14.657 "get_zone_info": false,
00:06:14.657 "zone_management": false,
00:06:14.657 "zone_append": false,
00:06:14.657 "compare": false,
00:06:14.657 "compare_and_write": false,
00:06:14.657 "abort": true,
00:06:14.657 "seek_hole": false,
00:06:14.657 "seek_data": false,
00:06:14.657 "copy": true,
00:06:14.657 "nvme_iov_md": false
00:06:14.657 },
00:06:14.657 "memory_domains": [
00:06:14.657 {
00:06:14.657 "dma_device_id": "system",
00:06:14.657 "dma_device_type": 1
00:06:14.657 },
00:06:14.657 {
00:06:14.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:14.657 "dma_device_type": 2
00:06:14.657 }
00:06:14.657 ],
00:06:14.657 "driver_specific": {}
00:06:14.657 }
00:06:14.657 ]'
00:06:14.657 20:03:00 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:06:14.657 20:03:00 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:06:14.657 20:03:00 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:06:14.657 20:03:00 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:14.657 20:03:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:14.657 [2024-10-17 20:03:00.242269] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:06:14.657 [2024-10-17 20:03:00.242482] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:06:14.657 [2024-10-17 20:03:00.242536] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880
00:06:14.657 [2024-10-17 20:03:00.242565] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:06:14.657 [2024-10-17 20:03:00.246366] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:06:14.657 [2024-10-17 20:03:00.246420] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:06:14.657 Passthru0
00:06:14.657 20:03:00 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:14.657 20:03:00 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:06:14.657 20:03:00 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:14.657 20:03:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:14.657 20:03:00 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:14.657 20:03:00 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:06:14.657 {
00:06:14.657 "name": "Malloc0",
00:06:14.657 "aliases": [
00:06:14.657 "80f91ed8-121d-4347-99dc-1d94e2d0d659"
00:06:14.657 ],
00:06:14.657 "product_name": "Malloc disk",
00:06:14.657 "block_size": 512,
00:06:14.657 "num_blocks": 16384,
00:06:14.657 "uuid": "80f91ed8-121d-4347-99dc-1d94e2d0d659",
00:06:14.657 "assigned_rate_limits": {
00:06:14.657 "rw_ios_per_sec": 0,
00:06:14.657 "rw_mbytes_per_sec": 0,
00:06:14.657 "r_mbytes_per_sec": 0,
00:06:14.657 "w_mbytes_per_sec": 0
00:06:14.657 },
00:06:14.657 "claimed": true,
00:06:14.657 "claim_type": "exclusive_write",
00:06:14.657 "zoned": false,
00:06:14.657 "supported_io_types": {
00:06:14.657 "read": true,
00:06:14.657 "write": true,
00:06:14.657 "unmap": true,
00:06:14.657 "flush": true,
00:06:14.657 "reset": true,
00:06:14.657 "nvme_admin": false,
00:06:14.657 "nvme_io": false,
00:06:14.657 "nvme_io_md": false,
00:06:14.657 "write_zeroes": true,
00:06:14.657 "zcopy": true,
00:06:14.657 "get_zone_info": false,
00:06:14.657 "zone_management": false,
00:06:14.657 "zone_append": false,
00:06:14.657 "compare": false,
00:06:14.657 "compare_and_write": false,
00:06:14.657 "abort": true,
00:06:14.657 "seek_hole": false,
00:06:14.657 "seek_data": false,
00:06:14.657 "copy": true,
00:06:14.657 "nvme_iov_md": false
00:06:14.657 },
00:06:14.657 "memory_domains": [
00:06:14.657 {
00:06:14.657 "dma_device_id": "system",
00:06:14.657 "dma_device_type": 1
00:06:14.657 },
00:06:14.657 {
00:06:14.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:14.657 "dma_device_type": 2
00:06:14.657 }
00:06:14.657 ],
00:06:14.657 "driver_specific": {}
00:06:14.657 },
00:06:14.657 {
00:06:14.657 "name": "Passthru0",
00:06:14.657 "aliases": [
00:06:14.657 "c4d89b56-9d89-509e-b1d2-c267dfa2bac2"
00:06:14.657 ],
00:06:14.657 "product_name": "passthru",
00:06:14.657 "block_size": 512,
00:06:14.657 "num_blocks": 16384,
00:06:14.657 "uuid": "c4d89b56-9d89-509e-b1d2-c267dfa2bac2",
00:06:14.657 "assigned_rate_limits": {
00:06:14.657 "rw_ios_per_sec": 0,
00:06:14.657 "rw_mbytes_per_sec": 0,
00:06:14.657 "r_mbytes_per_sec": 0,
00:06:14.657 "w_mbytes_per_sec": 0
00:06:14.657 },
00:06:14.657 "claimed": false,
00:06:14.657 "zoned": false,
00:06:14.657 "supported_io_types": {
00:06:14.657 "read": true,
00:06:14.657 "write": true,
00:06:14.657 "unmap": true,
00:06:14.657 "flush": true,
00:06:14.657 "reset": true,
00:06:14.657 "nvme_admin": false,
00:06:14.657 "nvme_io": false,
00:06:14.657 "nvme_io_md": false,
00:06:14.657 "write_zeroes": true,
00:06:14.657 "zcopy": true,
00:06:14.657 "get_zone_info": false,
00:06:14.657 "zone_management": false,
00:06:14.657 "zone_append": false,
00:06:14.657 "compare": false,
00:06:14.657 "compare_and_write": false,
00:06:14.657 "abort": true,
00:06:14.657 "seek_hole": false,
00:06:14.657 "seek_data": false,
00:06:14.657 "copy": true,
00:06:14.657 "nvme_iov_md": false
00:06:14.657 },
00:06:14.657 "memory_domains": [
00:06:14.657 {
00:06:14.657 "dma_device_id": "system",
00:06:14.657 "dma_device_type": 1
00:06:14.657 },
00:06:14.657 {
00:06:14.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:14.657 "dma_device_type": 2
00:06:14.657 }
00:06:14.657 ],
00:06:14.657 "driver_specific": {
00:06:14.657 "passthru": {
00:06:14.657 "name": "Passthru0",
00:06:14.657 "base_bdev_name": "Malloc0"
00:06:14.657 }
00:06:14.657 }
00:06:14.657 }
00:06:14.657 ]'
00:06:14.916 20:03:00 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:06:14.916 20:03:00 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:06:14.916 20:03:00 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:06:14.916 20:03:00 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:14.916 20:03:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:14.916 20:03:00 rpc.rpc_integrity
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:14.916 20:03:00 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:06:14.916 20:03:00 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:14.916 20:03:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:14.916 20:03:00 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:14.916 20:03:00 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:06:14.916 20:03:00 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:14.916 20:03:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:14.916 20:03:00 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:14.916 20:03:00 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:06:14.916 20:03:00 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:06:14.916 ************************************
00:06:14.916 END TEST rpc_integrity
00:06:14.916 ************************************
00:06:14.916 20:03:00 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:06:14.916
00:06:14.916 real 0m0.375s
00:06:14.916 user 0m0.229s
00:06:14.916 sys 0m0.043s
00:06:14.916 20:03:00 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:14.916 20:03:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:14.916 20:03:00 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:06:14.916 20:03:00 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:14.916 20:03:00 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:14.916 20:03:00 rpc -- common/autotest_common.sh@10 -- # set +x
00:06:14.916 ************************************
00:06:14.916 START TEST rpc_plugins
00:06:14.916 ************************************
00:06:14.916 20:03:00 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins
00:06:14.916 20:03:00 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:06:14.916 20:03:00 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:14.916 20:03:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:06:14.916 20:03:00 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:14.916 20:03:00 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:06:14.916 20:03:00 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:06:14.916 20:03:00 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:14.916 20:03:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:06:14.916 20:03:00 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:14.916 20:03:00 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[
00:06:14.916 {
00:06:14.916 "name": "Malloc1",
00:06:14.916 "aliases": [
00:06:14.916 "e49fbab9-d76b-435c-a258-de5ee9ffd2c4"
00:06:14.916 ],
00:06:14.916 "product_name": "Malloc disk",
00:06:14.916 "block_size": 4096,
00:06:14.916 "num_blocks": 256,
00:06:14.916 "uuid": "e49fbab9-d76b-435c-a258-de5ee9ffd2c4",
00:06:14.916 "assigned_rate_limits": {
00:06:14.916 "rw_ios_per_sec": 0,
00:06:14.916 "rw_mbytes_per_sec": 0,
00:06:14.916 "r_mbytes_per_sec": 0,
00:06:14.916 "w_mbytes_per_sec": 0
00:06:14.916 },
00:06:14.916 "claimed": false,
00:06:14.916 "zoned": false,
00:06:14.916 "supported_io_types": {
00:06:14.916 "read": true,
00:06:14.916 "write": true,
00:06:14.916 "unmap": true,
00:06:14.916 "flush": true,
00:06:14.916 "reset": true,
00:06:14.916 "nvme_admin": false,
00:06:14.916 "nvme_io": false,
00:06:14.916 "nvme_io_md": false,
00:06:14.916 "write_zeroes": true,
00:06:14.916 "zcopy": true,
00:06:14.916 "get_zone_info": false,
00:06:14.916 "zone_management": false,
00:06:14.916 "zone_append": false,
00:06:14.916 "compare": false,
00:06:14.916 "compare_and_write": false,
00:06:14.916 "abort": true,
00:06:14.916 "seek_hole": false,
00:06:14.916 "seek_data": false,
00:06:14.916 "copy": true,
00:06:14.916 "nvme_iov_md": false
00:06:14.916 },
00:06:14.916 "memory_domains": [
00:06:14.916 {
00:06:14.916 "dma_device_id": "system",
00:06:14.916 "dma_device_type": 1
00:06:14.916 },
00:06:14.916 {
00:06:14.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:14.916 "dma_device_type": 2
00:06:14.916 }
00:06:14.916 ],
00:06:14.916 "driver_specific": {}
00:06:14.916 }
00:06:14.916 ]'
00:06:15.175 20:03:00 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length
00:06:15.175 20:03:00 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:06:15.175 20:03:00 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:06:15.175 20:03:00 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:15.175 20:03:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:06:15.175 20:03:00 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:15.175 20:03:00 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:06:15.175 20:03:00 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:15.175 20:03:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:06:15.175 20:03:00 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:15.175 20:03:00 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]'
00:06:15.175 20:03:00 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length
00:06:15.175 ************************************
00:06:15.175 END TEST rpc_plugins
00:06:15.175 ************************************
00:06:15.175 20:03:00 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:06:15.175
00:06:15.175 real 0m0.176s
00:06:15.175 user 0m0.113s
00:06:15.175 sys 0m0.020s
00:06:15.175 20:03:00 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:15.175 20:03:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:06:15.175 20:03:00 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test
00:06:15.175 20:03:00 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:15.175 20:03:00 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:15.175 20:03:00 rpc -- common/autotest_common.sh@10 -- # set +x
00:06:15.175 ************************************
00:06:15.175 START TEST rpc_trace_cmd_test
00:06:15.175 ************************************
00:06:15.175 20:03:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test
00:06:15.175 20:03:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info
00:06:15.175 20:03:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info
00:06:15.175 20:03:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:15.175 20:03:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:06:15.175 20:03:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:15.175 20:03:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{
00:06:15.175 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56781",
00:06:15.175 "tpoint_group_mask": "0x8",
00:06:15.175 "iscsi_conn": {
00:06:15.175 "mask": "0x2",
00:06:15.175 "tpoint_mask": "0x0"
00:06:15.175 },
00:06:15.175 "scsi": {
00:06:15.175 "mask": "0x4",
00:06:15.175 "tpoint_mask": "0x0"
00:06:15.175 },
00:06:15.175 "bdev": {
00:06:15.175 "mask": "0x8",
00:06:15.175 "tpoint_mask": "0xffffffffffffffff"
00:06:15.175 },
00:06:15.175 "nvmf_rdma": {
00:06:15.175 "mask": "0x10",
00:06:15.175 "tpoint_mask": "0x0"
00:06:15.175 },
00:06:15.175 "nvmf_tcp": {
00:06:15.175 "mask": "0x20",
00:06:15.175 "tpoint_mask": "0x0"
00:06:15.175 },
00:06:15.175 "ftl": {
00:06:15.175 "mask": "0x40",
00:06:15.175 "tpoint_mask": "0x0"
00:06:15.175 },
00:06:15.175 "blobfs": {
00:06:15.175 "mask": "0x80",
00:06:15.175 "tpoint_mask": "0x0"
00:06:15.175 },
00:06:15.175 "dsa": {
00:06:15.175 "mask": "0x200",
00:06:15.175 "tpoint_mask": "0x0"
00:06:15.175 },
00:06:15.175 "thread": {
00:06:15.175 "mask": "0x400",
00:06:15.175 "tpoint_mask": "0x0"
00:06:15.175 },
00:06:15.175 "nvme_pcie": {
00:06:15.175 "mask": "0x800",
00:06:15.175 "tpoint_mask": "0x0"
00:06:15.175 },
00:06:15.175 "iaa": {
00:06:15.175 "mask": "0x1000",
00:06:15.175 "tpoint_mask": "0x0"
00:06:15.175 },
00:06:15.175 "nvme_tcp": {
00:06:15.175 "mask": "0x2000",
00:06:15.175 "tpoint_mask": "0x0"
00:06:15.175 },
00:06:15.175 "bdev_nvme": {
00:06:15.175 "mask": "0x4000",
00:06:15.175 "tpoint_mask": "0x0"
00:06:15.175 },
00:06:15.175 "sock": {
00:06:15.175 "mask": "0x8000",
00:06:15.175 "tpoint_mask": "0x0"
00:06:15.175 },
00:06:15.175 "blob": {
00:06:15.175 "mask": "0x10000",
00:06:15.175 "tpoint_mask": "0x0"
00:06:15.175 },
00:06:15.175 "bdev_raid": {
00:06:15.175 "mask": "0x20000",
00:06:15.175 "tpoint_mask": "0x0"
00:06:15.175 },
00:06:15.175 "scheduler": {
00:06:15.175 "mask": "0x40000",
00:06:15.175 "tpoint_mask": "0x0"
00:06:15.175 }
00:06:15.175 }'
00:06:15.175 20:03:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length
00:06:15.175 20:03:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']'
00:06:15.175 20:03:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")'
00:06:15.434 20:03:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']'
00:06:15.434 20:03:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")'
00:06:15.434 20:03:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']'
00:06:15.434 20:03:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")'
00:06:15.434 20:03:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']'
00:06:15.434 20:03:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask
00:06:15.434 ************************************
00:06:15.434 END TEST rpc_trace_cmd_test
00:06:15.434 ************************************
00:06:15.434 20:03:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']'
00:06:15.434
00:06:15.434 real 0m0.266s
00:06:15.434 user 0m0.230s
00:06:15.434 sys 0m0.027s
00:06:15.434 20:03:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:15.434 20:03:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:06:15.434 20:03:01 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]]
00:06:15.434 20:03:01 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd
00:06:15.434 20:03:01 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity
00:06:15.434 20:03:01 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:15.434 20:03:01 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:15.434 20:03:01 rpc -- common/autotest_common.sh@10 -- # set +x
00:06:15.434 ************************************
00:06:15.434 START TEST rpc_daemon_integrity
00:06:15.434 ************************************
00:06:15.434 20:03:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity
00:06:15.434 20:03:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:06:15.434 20:03:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:15.434 20:03:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:15.434 20:03:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:15.434 20:03:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:06:15.434 20:03:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length
00:06:15.692 20:03:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:06:15.692 20:03:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:06:15.692 20:03:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:15.692 20:03:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:15.692 20:03:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:15.692 20:03:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2
00:06:15.692 20:03:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:06:15.692 20:03:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:15.692 20:03:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:15.692 20:03:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:15.692 20:03:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:06:15.692 {
00:06:15.692 "name": "Malloc2",
00:06:15.692 "aliases": [
00:06:15.692 "6fdea2b5-386a-4ddd-b401-252fbea83a09"
00:06:15.692 ],
00:06:15.692 "product_name": "Malloc disk",
00:06:15.692 "block_size": 512,
00:06:15.692 "num_blocks": 16384,
00:06:15.692 "uuid": "6fdea2b5-386a-4ddd-b401-252fbea83a09",
00:06:15.692 "assigned_rate_limits": {
00:06:15.692 "rw_ios_per_sec": 0,
00:06:15.692 "rw_mbytes_per_sec": 0,
00:06:15.692 "r_mbytes_per_sec": 0,
00:06:15.692 "w_mbytes_per_sec": 0
00:06:15.692 },
00:06:15.692 "claimed": false,
00:06:15.692 "zoned": false,
00:06:15.692 "supported_io_types": {
00:06:15.692 "read": true,
00:06:15.692 "write": true,
00:06:15.692 "unmap": true,
00:06:15.692 "flush": true,
00:06:15.692 "reset": true,
00:06:15.692 "nvme_admin": false,
00:06:15.692 "nvme_io": false,
00:06:15.692 "nvme_io_md": false,
00:06:15.692 "write_zeroes": true,
00:06:15.692 "zcopy": true,
00:06:15.692 "get_zone_info": false,
00:06:15.692 "zone_management": false,
00:06:15.692 "zone_append": false,
00:06:15.692 "compare": false,
00:06:15.692 "compare_and_write": false,
00:06:15.692 "abort": true,
00:06:15.692 "seek_hole": false,
00:06:15.692 "seek_data": false,
00:06:15.692 "copy": true,
00:06:15.692 "nvme_iov_md": false
00:06:15.692 },
00:06:15.692 "memory_domains": [
00:06:15.692 {
00:06:15.692 "dma_device_id": "system",
00:06:15.692 "dma_device_type": 1
00:06:15.692 },
00:06:15.692 {
00:06:15.692 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:15.692 "dma_device_type": 2
00:06:15.692 }
00:06:15.692 ], 00:06:15.692 "driver_specific": {} 00:06:15.692 } 00:06:15.692 ]' 00:06:15.692 20:03:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:15.692 20:03:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:15.692 20:03:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:15.692 20:03:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.692 20:03:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:15.692 [2024-10-17 20:03:01.216634] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:15.692 [2024-10-17 20:03:01.216883] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:15.692 [2024-10-17 20:03:01.216933] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:06:15.692 [2024-10-17 20:03:01.216954] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:15.692 [2024-10-17 20:03:01.220468] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:15.692 [2024-10-17 20:03:01.220517] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:15.692 Passthru0 00:06:15.692 20:03:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.692 20:03:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:15.692 20:03:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.692 20:03:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:15.692 20:03:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.692 20:03:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:15.692 { 00:06:15.692 "name": "Malloc2", 00:06:15.692 "aliases": [ 00:06:15.692 "6fdea2b5-386a-4ddd-b401-252fbea83a09" 
00:06:15.692 ], 00:06:15.692 "product_name": "Malloc disk", 00:06:15.692 "block_size": 512, 00:06:15.693 "num_blocks": 16384, 00:06:15.693 "uuid": "6fdea2b5-386a-4ddd-b401-252fbea83a09", 00:06:15.693 "assigned_rate_limits": { 00:06:15.693 "rw_ios_per_sec": 0, 00:06:15.693 "rw_mbytes_per_sec": 0, 00:06:15.693 "r_mbytes_per_sec": 0, 00:06:15.693 "w_mbytes_per_sec": 0 00:06:15.693 }, 00:06:15.693 "claimed": true, 00:06:15.693 "claim_type": "exclusive_write", 00:06:15.693 "zoned": false, 00:06:15.693 "supported_io_types": { 00:06:15.693 "read": true, 00:06:15.693 "write": true, 00:06:15.693 "unmap": true, 00:06:15.693 "flush": true, 00:06:15.693 "reset": true, 00:06:15.693 "nvme_admin": false, 00:06:15.693 "nvme_io": false, 00:06:15.693 "nvme_io_md": false, 00:06:15.693 "write_zeroes": true, 00:06:15.693 "zcopy": true, 00:06:15.693 "get_zone_info": false, 00:06:15.693 "zone_management": false, 00:06:15.693 "zone_append": false, 00:06:15.693 "compare": false, 00:06:15.693 "compare_and_write": false, 00:06:15.693 "abort": true, 00:06:15.693 "seek_hole": false, 00:06:15.693 "seek_data": false, 00:06:15.693 "copy": true, 00:06:15.693 "nvme_iov_md": false 00:06:15.693 }, 00:06:15.693 "memory_domains": [ 00:06:15.693 { 00:06:15.693 "dma_device_id": "system", 00:06:15.693 "dma_device_type": 1 00:06:15.693 }, 00:06:15.693 { 00:06:15.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:15.693 "dma_device_type": 2 00:06:15.693 } 00:06:15.693 ], 00:06:15.693 "driver_specific": {} 00:06:15.693 }, 00:06:15.693 { 00:06:15.693 "name": "Passthru0", 00:06:15.693 "aliases": [ 00:06:15.693 "7fd83bae-d467-5d92-9394-286b6b477d61" 00:06:15.693 ], 00:06:15.693 "product_name": "passthru", 00:06:15.693 "block_size": 512, 00:06:15.693 "num_blocks": 16384, 00:06:15.693 "uuid": "7fd83bae-d467-5d92-9394-286b6b477d61", 00:06:15.693 "assigned_rate_limits": { 00:06:15.693 "rw_ios_per_sec": 0, 00:06:15.693 "rw_mbytes_per_sec": 0, 00:06:15.693 "r_mbytes_per_sec": 0, 00:06:15.693 "w_mbytes_per_sec": 0 
00:06:15.693 }, 00:06:15.693 "claimed": false, 00:06:15.693 "zoned": false, 00:06:15.693 "supported_io_types": { 00:06:15.693 "read": true, 00:06:15.693 "write": true, 00:06:15.693 "unmap": true, 00:06:15.693 "flush": true, 00:06:15.693 "reset": true, 00:06:15.693 "nvme_admin": false, 00:06:15.693 "nvme_io": false, 00:06:15.693 "nvme_io_md": false, 00:06:15.693 "write_zeroes": true, 00:06:15.693 "zcopy": true, 00:06:15.693 "get_zone_info": false, 00:06:15.693 "zone_management": false, 00:06:15.693 "zone_append": false, 00:06:15.693 "compare": false, 00:06:15.693 "compare_and_write": false, 00:06:15.693 "abort": true, 00:06:15.693 "seek_hole": false, 00:06:15.693 "seek_data": false, 00:06:15.693 "copy": true, 00:06:15.693 "nvme_iov_md": false 00:06:15.693 }, 00:06:15.693 "memory_domains": [ 00:06:15.693 { 00:06:15.693 "dma_device_id": "system", 00:06:15.693 "dma_device_type": 1 00:06:15.693 }, 00:06:15.693 { 00:06:15.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:15.693 "dma_device_type": 2 00:06:15.693 } 00:06:15.693 ], 00:06:15.693 "driver_specific": { 00:06:15.693 "passthru": { 00:06:15.693 "name": "Passthru0", 00:06:15.693 "base_bdev_name": "Malloc2" 00:06:15.693 } 00:06:15.693 } 00:06:15.693 } 00:06:15.693 ]' 00:06:15.693 20:03:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:15.693 20:03:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:15.693 20:03:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:15.693 20:03:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.693 20:03:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:15.693 20:03:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.693 20:03:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:15.693 20:03:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:06:15.693 20:03:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:15.951 20:03:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.951 20:03:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:15.951 20:03:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.951 20:03:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:15.951 20:03:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.951 20:03:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:15.951 20:03:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:15.951 20:03:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:15.951 00:06:15.951 real 0m0.363s 00:06:15.951 user 0m0.216s 00:06:15.951 sys 0m0.046s 00:06:15.951 20:03:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:15.951 20:03:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:15.951 ************************************ 00:06:15.951 END TEST rpc_daemon_integrity 00:06:15.951 ************************************ 00:06:15.951 20:03:01 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:15.951 20:03:01 rpc -- rpc/rpc.sh@84 -- # killprocess 56781 00:06:15.951 20:03:01 rpc -- common/autotest_common.sh@950 -- # '[' -z 56781 ']' 00:06:15.951 20:03:01 rpc -- common/autotest_common.sh@954 -- # kill -0 56781 00:06:15.951 20:03:01 rpc -- common/autotest_common.sh@955 -- # uname 00:06:15.951 20:03:01 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:15.951 20:03:01 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 56781 00:06:15.951 killing process with pid 56781 00:06:15.951 20:03:01 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:15.951 20:03:01 rpc -- common/autotest_common.sh@960 -- 
# '[' reactor_0 = sudo ']' 00:06:15.951 20:03:01 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 56781' 00:06:15.951 20:03:01 rpc -- common/autotest_common.sh@969 -- # kill 56781 00:06:15.952 20:03:01 rpc -- common/autotest_common.sh@974 -- # wait 56781 00:06:19.235 00:06:19.235 real 0m5.901s 00:06:19.235 user 0m6.480s 00:06:19.235 sys 0m1.123s 00:06:19.235 20:03:04 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:19.235 20:03:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.235 ************************************ 00:06:19.235 END TEST rpc 00:06:19.235 ************************************ 00:06:19.235 20:03:04 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:19.235 20:03:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:19.235 20:03:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:19.235 20:03:04 -- common/autotest_common.sh@10 -- # set +x 00:06:19.235 ************************************ 00:06:19.235 START TEST skip_rpc 00:06:19.235 ************************************ 00:06:19.235 20:03:04 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:19.235 * Looking for test storage... 
00:06:19.235 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:19.235 20:03:04 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:19.235 20:03:04 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:06:19.235 20:03:04 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:19.235 20:03:04 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:19.235 20:03:04 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:19.235 20:03:04 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:19.235 20:03:04 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:19.235 20:03:04 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:19.235 20:03:04 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:19.235 20:03:04 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:19.235 20:03:04 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:19.235 20:03:04 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:19.235 20:03:04 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:19.235 20:03:04 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:19.235 20:03:04 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:19.235 20:03:04 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:19.235 20:03:04 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:19.235 20:03:04 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:19.235 20:03:04 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:19.235 20:03:04 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:19.235 20:03:04 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:19.235 20:03:04 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:19.235 20:03:04 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:19.235 20:03:04 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:19.235 20:03:04 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:19.235 20:03:04 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:19.235 20:03:04 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:19.235 20:03:04 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:19.235 20:03:04 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:19.235 20:03:04 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:19.235 20:03:04 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:19.235 20:03:04 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:19.235 20:03:04 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:19.235 20:03:04 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:19.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.235 --rc genhtml_branch_coverage=1 00:06:19.235 --rc genhtml_function_coverage=1 00:06:19.235 --rc genhtml_legend=1 00:06:19.235 --rc geninfo_all_blocks=1 00:06:19.235 --rc geninfo_unexecuted_blocks=1 00:06:19.235 00:06:19.235 ' 00:06:19.235 20:03:04 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:19.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.235 --rc genhtml_branch_coverage=1 00:06:19.235 --rc genhtml_function_coverage=1 00:06:19.235 --rc genhtml_legend=1 00:06:19.235 --rc geninfo_all_blocks=1 00:06:19.235 --rc geninfo_unexecuted_blocks=1 00:06:19.235 00:06:19.235 ' 00:06:19.235 20:03:04 skip_rpc -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:06:19.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.235 --rc genhtml_branch_coverage=1 00:06:19.235 --rc genhtml_function_coverage=1 00:06:19.235 --rc genhtml_legend=1 00:06:19.236 --rc geninfo_all_blocks=1 00:06:19.236 --rc geninfo_unexecuted_blocks=1 00:06:19.236 00:06:19.236 ' 00:06:19.236 20:03:04 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:19.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.236 --rc genhtml_branch_coverage=1 00:06:19.236 --rc genhtml_function_coverage=1 00:06:19.236 --rc genhtml_legend=1 00:06:19.236 --rc geninfo_all_blocks=1 00:06:19.236 --rc geninfo_unexecuted_blocks=1 00:06:19.236 00:06:19.236 ' 00:06:19.236 20:03:04 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:19.236 20:03:04 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:19.236 20:03:04 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:19.236 20:03:04 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:19.236 20:03:04 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:19.236 20:03:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.236 ************************************ 00:06:19.236 START TEST skip_rpc 00:06:19.236 ************************************ 00:06:19.236 20:03:04 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:06:19.236 20:03:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57019 00:06:19.236 20:03:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:19.236 20:03:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:19.236 20:03:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:19.236 [2024-10-17 20:03:04.577686] Starting SPDK v25.01-pre 
git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 00:06:19.236 [2024-10-17 20:03:04.578306] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57019 ] 00:06:19.236 [2024-10-17 20:03:04.757602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.494 [2024-10-17 20:03:04.891395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.759 20:03:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:24.759 20:03:09 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:24.759 20:03:09 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:24.759 20:03:09 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:24.759 20:03:09 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:24.759 20:03:09 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:24.759 20:03:09 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:24.759 20:03:09 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:06:24.759 20:03:09 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.759 20:03:09 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.759 20:03:09 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:24.759 20:03:09 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:24.759 20:03:09 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:24.759 20:03:09 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:24.759 20:03:09 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:06:24.759 20:03:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:24.759 20:03:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57019 00:06:24.759 20:03:09 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 57019 ']' 00:06:24.759 20:03:09 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 57019 00:06:24.759 20:03:09 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:06:24.759 20:03:09 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:24.759 20:03:09 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57019 00:06:24.759 20:03:09 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:24.759 killing process with pid 57019 00:06:24.759 20:03:09 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:24.759 20:03:09 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57019' 00:06:24.759 20:03:09 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 57019 00:06:24.759 20:03:09 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 57019 00:06:26.658 00:06:26.658 real 0m7.411s 00:06:26.658 user 0m6.804s 00:06:26.658 sys 0m0.506s 00:06:26.658 20:03:11 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:26.658 ************************************ 00:06:26.658 END TEST skip_rpc 00:06:26.658 ************************************ 00:06:26.658 20:03:11 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.658 20:03:11 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:26.658 20:03:11 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:26.658 20:03:11 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:26.658 20:03:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.658 
************************************ 00:06:26.658 START TEST skip_rpc_with_json 00:06:26.658 ************************************ 00:06:26.658 20:03:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:06:26.658 20:03:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:26.658 20:03:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57125 00:06:26.658 20:03:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:26.658 20:03:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:26.658 20:03:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57125 00:06:26.658 20:03:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 57125 ']' 00:06:26.658 20:03:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.658 20:03:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:26.658 20:03:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.658 20:03:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:26.658 20:03:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:26.658 [2024-10-17 20:03:12.043170] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 
00:06:26.658 [2024-10-17 20:03:12.043374] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57125 ] 00:06:26.658 [2024-10-17 20:03:12.226983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.916 [2024-10-17 20:03:12.388889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.849 20:03:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:27.849 20:03:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:06:27.849 20:03:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:27.849 20:03:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.849 20:03:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:27.849 [2024-10-17 20:03:13.316035] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:27.849 request: 00:06:27.849 { 00:06:27.849 "trtype": "tcp", 00:06:27.849 "method": "nvmf_get_transports", 00:06:27.849 "req_id": 1 00:06:27.849 } 00:06:27.849 Got JSON-RPC error response 00:06:27.849 response: 00:06:27.849 { 00:06:27.849 "code": -19, 00:06:27.849 "message": "No such device" 00:06:27.849 } 00:06:27.849 20:03:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:27.849 20:03:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:27.849 20:03:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.849 20:03:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:27.849 [2024-10-17 20:03:13.328235] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:06:27.849 20:03:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.849 20:03:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:27.849 20:03:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.849 20:03:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:28.107 20:03:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.107 20:03:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:28.107 { 00:06:28.107 "subsystems": [ 00:06:28.107 { 00:06:28.107 "subsystem": "fsdev", 00:06:28.107 "config": [ 00:06:28.107 { 00:06:28.107 "method": "fsdev_set_opts", 00:06:28.107 "params": { 00:06:28.107 "fsdev_io_pool_size": 65535, 00:06:28.107 "fsdev_io_cache_size": 256 00:06:28.107 } 00:06:28.107 } 00:06:28.107 ] 00:06:28.107 }, 00:06:28.107 { 00:06:28.107 "subsystem": "keyring", 00:06:28.107 "config": [] 00:06:28.107 }, 00:06:28.107 { 00:06:28.107 "subsystem": "iobuf", 00:06:28.107 "config": [ 00:06:28.107 { 00:06:28.107 "method": "iobuf_set_options", 00:06:28.107 "params": { 00:06:28.107 "small_pool_count": 8192, 00:06:28.107 "large_pool_count": 1024, 00:06:28.107 "small_bufsize": 8192, 00:06:28.107 "large_bufsize": 135168 00:06:28.107 } 00:06:28.107 } 00:06:28.107 ] 00:06:28.107 }, 00:06:28.107 { 00:06:28.107 "subsystem": "sock", 00:06:28.107 "config": [ 00:06:28.107 { 00:06:28.107 "method": "sock_set_default_impl", 00:06:28.107 "params": { 00:06:28.107 "impl_name": "posix" 00:06:28.107 } 00:06:28.107 }, 00:06:28.107 { 00:06:28.107 "method": "sock_impl_set_options", 00:06:28.107 "params": { 00:06:28.107 "impl_name": "ssl", 00:06:28.107 "recv_buf_size": 4096, 00:06:28.107 "send_buf_size": 4096, 00:06:28.107 "enable_recv_pipe": true, 00:06:28.107 "enable_quickack": false, 00:06:28.108 "enable_placement_id": 0, 00:06:28.108 
"enable_zerocopy_send_server": true, 00:06:28.108 "enable_zerocopy_send_client": false, 00:06:28.108 "zerocopy_threshold": 0, 00:06:28.108 "tls_version": 0, 00:06:28.108 "enable_ktls": false 00:06:28.108 } 00:06:28.108 }, 00:06:28.108 { 00:06:28.108 "method": "sock_impl_set_options", 00:06:28.108 "params": { 00:06:28.108 "impl_name": "posix", 00:06:28.108 "recv_buf_size": 2097152, 00:06:28.108 "send_buf_size": 2097152, 00:06:28.108 "enable_recv_pipe": true, 00:06:28.108 "enable_quickack": false, 00:06:28.108 "enable_placement_id": 0, 00:06:28.108 "enable_zerocopy_send_server": true, 00:06:28.108 "enable_zerocopy_send_client": false, 00:06:28.108 "zerocopy_threshold": 0, 00:06:28.108 "tls_version": 0, 00:06:28.108 "enable_ktls": false 00:06:28.108 } 00:06:28.108 } 00:06:28.108 ] 00:06:28.108 }, 00:06:28.108 { 00:06:28.108 "subsystem": "vmd", 00:06:28.108 "config": [] 00:06:28.108 }, 00:06:28.108 { 00:06:28.108 "subsystem": "accel", 00:06:28.108 "config": [ 00:06:28.108 { 00:06:28.108 "method": "accel_set_options", 00:06:28.108 "params": { 00:06:28.108 "small_cache_size": 128, 00:06:28.108 "large_cache_size": 16, 00:06:28.108 "task_count": 2048, 00:06:28.108 "sequence_count": 2048, 00:06:28.108 "buf_count": 2048 00:06:28.108 } 00:06:28.108 } 00:06:28.108 ] 00:06:28.108 }, 00:06:28.108 { 00:06:28.108 "subsystem": "bdev", 00:06:28.108 "config": [ 00:06:28.108 { 00:06:28.108 "method": "bdev_set_options", 00:06:28.108 "params": { 00:06:28.108 "bdev_io_pool_size": 65535, 00:06:28.108 "bdev_io_cache_size": 256, 00:06:28.108 "bdev_auto_examine": true, 00:06:28.108 "iobuf_small_cache_size": 128, 00:06:28.108 "iobuf_large_cache_size": 16 00:06:28.108 } 00:06:28.108 }, 00:06:28.108 { 00:06:28.108 "method": "bdev_raid_set_options", 00:06:28.108 "params": { 00:06:28.108 "process_window_size_kb": 1024, 00:06:28.108 "process_max_bandwidth_mb_sec": 0 00:06:28.108 } 00:06:28.108 }, 00:06:28.108 { 00:06:28.108 "method": "bdev_iscsi_set_options", 00:06:28.108 "params": { 00:06:28.108 
"timeout_sec": 30 00:06:28.108 } 00:06:28.108 }, 00:06:28.108 { 00:06:28.108 "method": "bdev_nvme_set_options", 00:06:28.108 "params": { 00:06:28.108 "action_on_timeout": "none", 00:06:28.108 "timeout_us": 0, 00:06:28.108 "timeout_admin_us": 0, 00:06:28.108 "keep_alive_timeout_ms": 10000, 00:06:28.108 "arbitration_burst": 0, 00:06:28.108 "low_priority_weight": 0, 00:06:28.108 "medium_priority_weight": 0, 00:06:28.108 "high_priority_weight": 0, 00:06:28.108 "nvme_adminq_poll_period_us": 10000, 00:06:28.108 "nvme_ioq_poll_period_us": 0, 00:06:28.108 "io_queue_requests": 0, 00:06:28.108 "delay_cmd_submit": true, 00:06:28.108 "transport_retry_count": 4, 00:06:28.108 "bdev_retry_count": 3, 00:06:28.108 "transport_ack_timeout": 0, 00:06:28.108 "ctrlr_loss_timeout_sec": 0, 00:06:28.108 "reconnect_delay_sec": 0, 00:06:28.108 "fast_io_fail_timeout_sec": 0, 00:06:28.108 "disable_auto_failback": false, 00:06:28.108 "generate_uuids": false, 00:06:28.108 "transport_tos": 0, 00:06:28.108 "nvme_error_stat": false, 00:06:28.108 "rdma_srq_size": 0, 00:06:28.108 "io_path_stat": false, 00:06:28.108 "allow_accel_sequence": false, 00:06:28.108 "rdma_max_cq_size": 0, 00:06:28.108 "rdma_cm_event_timeout_ms": 0, 00:06:28.108 "dhchap_digests": [ 00:06:28.108 "sha256", 00:06:28.108 "sha384", 00:06:28.108 "sha512" 00:06:28.108 ], 00:06:28.108 "dhchap_dhgroups": [ 00:06:28.108 "null", 00:06:28.108 "ffdhe2048", 00:06:28.108 "ffdhe3072", 00:06:28.108 "ffdhe4096", 00:06:28.108 "ffdhe6144", 00:06:28.108 "ffdhe8192" 00:06:28.108 ] 00:06:28.108 } 00:06:28.108 }, 00:06:28.108 { 00:06:28.108 "method": "bdev_nvme_set_hotplug", 00:06:28.108 "params": { 00:06:28.108 "period_us": 100000, 00:06:28.108 "enable": false 00:06:28.108 } 00:06:28.108 }, 00:06:28.108 { 00:06:28.108 "method": "bdev_wait_for_examine" 00:06:28.108 } 00:06:28.108 ] 00:06:28.108 }, 00:06:28.108 { 00:06:28.108 "subsystem": "scsi", 00:06:28.108 "config": null 00:06:28.108 }, 00:06:28.108 { 00:06:28.108 "subsystem": "scheduler", 
00:06:28.108 "config": [ 00:06:28.108 { 00:06:28.108 "method": "framework_set_scheduler", 00:06:28.108 "params": { 00:06:28.108 "name": "static" 00:06:28.108 } 00:06:28.108 } 00:06:28.108 ] 00:06:28.108 }, 00:06:28.108 { 00:06:28.108 "subsystem": "vhost_scsi", 00:06:28.108 "config": [] 00:06:28.108 }, 00:06:28.108 { 00:06:28.108 "subsystem": "vhost_blk", 00:06:28.108 "config": [] 00:06:28.108 }, 00:06:28.108 { 00:06:28.108 "subsystem": "ublk", 00:06:28.108 "config": [] 00:06:28.108 }, 00:06:28.108 { 00:06:28.108 "subsystem": "nbd", 00:06:28.108 "config": [] 00:06:28.108 }, 00:06:28.108 { 00:06:28.108 "subsystem": "nvmf", 00:06:28.108 "config": [ 00:06:28.108 { 00:06:28.108 "method": "nvmf_set_config", 00:06:28.108 "params": { 00:06:28.108 "discovery_filter": "match_any", 00:06:28.108 "admin_cmd_passthru": { 00:06:28.108 "identify_ctrlr": false 00:06:28.108 }, 00:06:28.108 "dhchap_digests": [ 00:06:28.108 "sha256", 00:06:28.108 "sha384", 00:06:28.108 "sha512" 00:06:28.108 ], 00:06:28.108 "dhchap_dhgroups": [ 00:06:28.108 "null", 00:06:28.108 "ffdhe2048", 00:06:28.108 "ffdhe3072", 00:06:28.108 "ffdhe4096", 00:06:28.108 "ffdhe6144", 00:06:28.108 "ffdhe8192" 00:06:28.108 ] 00:06:28.108 } 00:06:28.108 }, 00:06:28.108 { 00:06:28.108 "method": "nvmf_set_max_subsystems", 00:06:28.108 "params": { 00:06:28.108 "max_subsystems": 1024 00:06:28.108 } 00:06:28.108 }, 00:06:28.108 { 00:06:28.108 "method": "nvmf_set_crdt", 00:06:28.108 "params": { 00:06:28.108 "crdt1": 0, 00:06:28.108 "crdt2": 0, 00:06:28.108 "crdt3": 0 00:06:28.108 } 00:06:28.108 }, 00:06:28.108 { 00:06:28.108 "method": "nvmf_create_transport", 00:06:28.108 "params": { 00:06:28.108 "trtype": "TCP", 00:06:28.108 "max_queue_depth": 128, 00:06:28.108 "max_io_qpairs_per_ctrlr": 127, 00:06:28.108 "in_capsule_data_size": 4096, 00:06:28.108 "max_io_size": 131072, 00:06:28.108 "io_unit_size": 131072, 00:06:28.108 "max_aq_depth": 128, 00:06:28.108 "num_shared_buffers": 511, 00:06:28.108 "buf_cache_size": 4294967295, 
00:06:28.108 "dif_insert_or_strip": false, 00:06:28.108 "zcopy": false, 00:06:28.108 "c2h_success": true, 00:06:28.108 "sock_priority": 0, 00:06:28.108 "abort_timeout_sec": 1, 00:06:28.108 "ack_timeout": 0, 00:06:28.108 "data_wr_pool_size": 0 00:06:28.108 } 00:06:28.108 } 00:06:28.108 ] 00:06:28.108 }, 00:06:28.108 { 00:06:28.108 "subsystem": "iscsi", 00:06:28.108 "config": [ 00:06:28.108 { 00:06:28.108 "method": "iscsi_set_options", 00:06:28.108 "params": { 00:06:28.108 "node_base": "iqn.2016-06.io.spdk", 00:06:28.108 "max_sessions": 128, 00:06:28.108 "max_connections_per_session": 2, 00:06:28.108 "max_queue_depth": 64, 00:06:28.108 "default_time2wait": 2, 00:06:28.108 "default_time2retain": 20, 00:06:28.108 "first_burst_length": 8192, 00:06:28.108 "immediate_data": true, 00:06:28.108 "allow_duplicated_isid": false, 00:06:28.108 "error_recovery_level": 0, 00:06:28.108 "nop_timeout": 60, 00:06:28.108 "nop_in_interval": 30, 00:06:28.108 "disable_chap": false, 00:06:28.108 "require_chap": false, 00:06:28.108 "mutual_chap": false, 00:06:28.108 "chap_group": 0, 00:06:28.108 "max_large_datain_per_connection": 64, 00:06:28.108 "max_r2t_per_connection": 4, 00:06:28.108 "pdu_pool_size": 36864, 00:06:28.108 "immediate_data_pool_size": 16384, 00:06:28.108 "data_out_pool_size": 2048 00:06:28.108 } 00:06:28.108 } 00:06:28.108 ] 00:06:28.108 } 00:06:28.108 ] 00:06:28.108 } 00:06:28.108 20:03:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:28.108 20:03:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57125 00:06:28.108 20:03:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57125 ']' 00:06:28.108 20:03:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57125 00:06:28.108 20:03:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:28.108 20:03:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:06:28.108 20:03:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57125
00:06:28.108 killing process with pid 57125 20:03:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:28.108 20:03:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:28.108 20:03:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57125'
00:06:28.108 20:03:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 57125
00:06:28.108 20:03:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57125
00:06:30.681 20:03:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57180
00:06:30.681 20:03:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:06:30.681 20:03:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5
00:06:35.948 20:03:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57180
00:06:35.948 20:03:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57180 ']'
00:06:35.948 20:03:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57180
00:06:35.948 20:03:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname
00:06:35.948 20:03:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:35.948 20:03:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57180
00:06:35.948 killing process with pid 57180 20:03:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:35.948 20:03:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:35.948 20:03:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57180'
00:06:35.948 20:03:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 57180
00:06:35.948 20:03:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57180
00:06:37.873 20:03:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt
00:06:37.873 20:03:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt
00:06:37.873
00:06:37.873 real 0m11.286s
00:06:37.873 user 0m10.500s
00:06:37.873 sys 0m1.200s
00:06:37.873 20:03:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:37.873 ************************************
00:06:37.873 END TEST skip_rpc_with_json
00:06:37.873 ************************************
00:06:37.873 20:03:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:06:37.873 20:03:23 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay
00:06:37.873 20:03:23 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:37.873 20:03:23 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:37.873 20:03:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:37.873 ************************************
00:06:37.873 START TEST skip_rpc_with_delay
00:06:37.873 ************************************
00:06:37.873 20:03:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay
00:06:37.873 20:03:23 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:06:37.873 20:03:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0
00:06:37.873 20:03:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:06:37.873 20:03:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:06:37.873 20:03:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:37.873 20:03:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:06:37.873 20:03:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:37.873 20:03:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:06:37.873 20:03:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:37.873 20:03:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:06:37.873 20:03:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]]
00:06:37.873 20:03:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:06:37.873 [2024-10-17 20:03:23.363162] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started.
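The trace above runs `spdk_tgt` under a `NOT` wrapper: the test passes only if the command fails (here, with the expected `--wait-for-rpc` error). A hypothetical, simplified reimplementation of that pattern, not the actual `autotest_common.sh` helper:

```shell
# NOT: run a command that is expected to fail; succeed only if it does.
NOT() {
    if "$@"; then
        return 1    # command unexpectedly succeeded
    fi
    return 0        # command failed, as expected
}

# `false` always fails, so the wrapper reports success.
NOT false && echo "NOT works"
```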
00:06:37.873 ************************************
00:06:37.873 END TEST skip_rpc_with_delay
00:06:37.873 ************************************
00:06:37.873 20:03:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1
00:06:37.873 20:03:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:06:37.873 20:03:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:06:37.873 20:03:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:06:37.873
00:06:37.873 real 0m0.193s
00:06:37.873 user 0m0.100s
00:06:37.873 sys 0m0.090s
00:06:37.873 20:03:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:37.873 20:03:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x
00:06:37.873 20:03:23 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname
00:06:37.873 20:03:23 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']'
00:06:37.873 20:03:23 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init
00:06:37.873 20:03:23 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:37.873 20:03:23 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:37.873 20:03:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:37.873 ************************************
00:06:37.873 START TEST exit_on_failed_rpc_init
00:06:37.873 ************************************
00:06:37.873 20:03:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init
00:06:37.873 20:03:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57315
00:06:37.873 20:03:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57315
00:06:37.873 20:03:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 57315 ']'
00:06:37.873 20:03:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:37.873 20:03:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:06:37.873 20:03:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:37.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 20:03:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:37.873 20:03:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:37.873 20:03:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:06:38.131 [2024-10-17 20:03:23.630807] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization...
00:06:38.131 [2024-10-17 20:03:23.631058] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57315 ]
00:06:38.390 [2024-10-17 20:03:23.805954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:38.390 [2024-10-17 20:03:23.964419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:39.325 20:03:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:39.325 20:03:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0
00:06:39.325 20:03:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:06:39.325 20:03:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
00:06:39.325 20:03:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0
00:06:39.325 20:03:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
00:06:39.325 20:03:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:06:39.325 20:03:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:39.325 20:03:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:06:39.325 20:03:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:39.325 20:03:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:06:39.325 20:03:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:39.325 20:03:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:06:39.325 20:03:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]]
00:06:39.325 20:03:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
00:06:39.584 [2024-10-17 20:03:24.978026] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization...
00:06:39.584 [2024-10-17 20:03:24.978236] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57333 ]
00:06:39.584 [2024-10-17 20:03:25.158830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:39.842 [2024-10-17 20:03:25.315704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:39.842 [2024-10-17 20:03:25.315845] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
00:06:39.842 [2024-10-17 20:03:25.315883] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock
00:06:39.842 [2024-10-17 20:03:25.315913] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:06:40.101 20:03:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234
00:06:40.101 20:03:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:06:40.101 20:03:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106
00:06:40.101 20:03:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in
00:06:40.101 20:03:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1
00:06:40.101 20:03:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:06:40.101 20:03:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:06:40.101 20:03:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57315
00:06:40.101 20:03:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 57315 ']'
00:06:40.101 20:03:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 57315
00:06:40.101 20:03:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname
00:06:40.101 20:03:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:40.101 20:03:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57315
00:06:40.101 killing process with pid 57315 20:03:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:40.101 20:03:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:40.101 20:03:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57315'
00:06:40.101 20:03:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 57315
00:06:40.101 20:03:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 57315
00:06:42.639
00:06:42.639 real 0m4.423s
00:06:42.639 user 0m4.921s
00:06:42.639 sys 0m0.710s
00:06:42.639 20:03:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:42.639 ************************************
00:06:42.639 END TEST exit_on_failed_rpc_init
00:06:42.639 ************************************
00:06:42.639 20:03:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:06:42.639 20:03:27 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:06:42.639
00:06:42.639 real 0m23.756s
00:06:42.639 user 0m22.511s
00:06:42.639 sys 0m2.744s
00:06:42.639 20:03:27 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:42.639 20:03:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:42.639 ************************************
00:06:42.639 END TEST skip_rpc
00:06:42.639 ************************************
00:06:42.639 20:03:28 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh
00:06:42.639 20:03:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:42.639 20:03:28 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:42.639 20:03:28 -- common/autotest_common.sh@10 -- # set +x
00:06:42.639 ************************************
00:06:42.639 START TEST rpc_client
00:06:42.639 ************************************
00:06:42.639 20:03:28 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh
00:06:42.639 * Looking for test storage...
00:06:42.639 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client
00:06:42.639 20:03:28 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:06:42.639 20:03:28 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version
00:06:42.639 20:03:28 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:06:42.639 20:03:28 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:06:42.639 20:03:28 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:42.639 20:03:28 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:42.639 20:03:28 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:42.639 20:03:28 rpc_client -- scripts/common.sh@336 -- # IFS=.-:
00:06:42.639 20:03:28 rpc_client -- scripts/common.sh@336 -- # read -ra ver1
00:06:42.639 20:03:28 rpc_client -- scripts/common.sh@337 -- # IFS=.-:
00:06:42.639 20:03:28 rpc_client -- scripts/common.sh@337 -- # read -ra ver2
00:06:42.639 20:03:28 rpc_client -- scripts/common.sh@338 -- # local 'op=<'
00:06:42.639 20:03:28 rpc_client -- scripts/common.sh@340 -- # ver1_l=2
00:06:42.639 20:03:28 rpc_client -- scripts/common.sh@341 -- # ver2_l=1
00:06:42.639 20:03:28 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:42.639 20:03:28 rpc_client -- scripts/common.sh@344 -- # case "$op" in
00:06:42.639 20:03:28 rpc_client -- scripts/common.sh@345 -- # : 1
00:06:42.639 20:03:28 rpc_client -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:42.639 20:03:28 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:42.639 20:03:28 rpc_client -- scripts/common.sh@365 -- # decimal 1
00:06:42.639 20:03:28 rpc_client -- scripts/common.sh@353 -- # local d=1
00:06:42.639 20:03:28 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:42.639 20:03:28 rpc_client -- scripts/common.sh@355 -- # echo 1
00:06:42.639 20:03:28 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1
00:06:42.639 20:03:28 rpc_client -- scripts/common.sh@366 -- # decimal 2
00:06:42.639 20:03:28 rpc_client -- scripts/common.sh@353 -- # local d=2
00:06:42.639 20:03:28 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:42.639 20:03:28 rpc_client -- scripts/common.sh@355 -- # echo 2
00:06:42.639 20:03:28 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2
00:06:42.639 20:03:28 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:42.640 20:03:28 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:42.640 20:03:28 rpc_client -- scripts/common.sh@368 -- # return 0
00:06:42.640 20:03:28 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:42.640 20:03:28 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:06:42.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:42.640 --rc genhtml_branch_coverage=1
00:06:42.640 --rc genhtml_function_coverage=1
00:06:42.640 --rc genhtml_legend=1
00:06:42.640 --rc geninfo_all_blocks=1
00:06:42.640 --rc geninfo_unexecuted_blocks=1
00:06:42.640
00:06:42.640 '
00:06:42.640 20:03:28 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:06:42.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:42.640 --rc genhtml_branch_coverage=1
00:06:42.640 --rc genhtml_function_coverage=1
00:06:42.640 --rc genhtml_legend=1
00:06:42.640 --rc geninfo_all_blocks=1
00:06:42.640 --rc geninfo_unexecuted_blocks=1
00:06:42.640
00:06:42.640 '
00:06:42.640 20:03:28 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:06:42.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:42.640 --rc genhtml_branch_coverage=1
00:06:42.640 --rc genhtml_function_coverage=1
00:06:42.640 --rc genhtml_legend=1
00:06:42.640 --rc geninfo_all_blocks=1
00:06:42.640 --rc geninfo_unexecuted_blocks=1
00:06:42.640
00:06:42.640 '
00:06:42.640 20:03:28 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:06:42.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:42.640 --rc genhtml_branch_coverage=1
00:06:42.640 --rc genhtml_function_coverage=1
00:06:42.640 --rc genhtml_legend=1
00:06:42.640 --rc geninfo_all_blocks=1
00:06:42.640 --rc geninfo_unexecuted_blocks=1
00:06:42.640
00:06:42.640 '
00:06:42.640 20:03:28 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test
00:06:42.640 OK
00:06:42.640 20:03:28 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT
00:06:42.640
00:06:42.640 real 0m0.261s
00:06:42.640 user 0m0.163s
00:06:42.640 sys 0m0.107s
00:06:42.640 20:03:28 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:42.640 20:03:28 rpc_client -- common/autotest_common.sh@10 -- # set +x
00:06:42.640 ************************************
00:06:42.640 END TEST rpc_client
00:06:42.640 ************************************
00:06:42.899 20:03:28 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh
00:06:42.899 20:03:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:42.899 20:03:28 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:42.899 20:03:28 -- common/autotest_common.sh@10 -- # set +x
00:06:42.899 ************************************
00:06:42.899 START TEST json_config
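The `cmp_versions 1.15 '<' 2` trace above is how the scripts decide whether the installed lcov predates version 2 before choosing coverage options. A simplified sketch of the same check using `sort -V`, not the actual `scripts/common.sh` implementation:

```shell
# lt VER1 VER2: succeed if VER1 sorts strictly before VER2 in version order.
lt() {
    [ "$1" = "$2" ] && return 1   # equal versions are not "less than"
    [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

# The comparison the trace performs on the lcov version:
if lt 1.15 2; then echo "1.15 < 2"; fi
```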
00:06:42.899 ************************************
00:06:42.899 20:03:28 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh
00:06:42.899 20:03:28 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:06:42.899 20:03:28 json_config -- common/autotest_common.sh@1691 -- # lcov --version
00:06:42.899 20:03:28 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:06:42.899 20:03:28 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:06:42.899 20:03:28 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:42.899 20:03:28 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:42.899 20:03:28 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:42.899 20:03:28 json_config -- scripts/common.sh@336 -- # IFS=.-:
00:06:42.899 20:03:28 json_config -- scripts/common.sh@336 -- # read -ra ver1
00:06:42.899 20:03:28 json_config -- scripts/common.sh@337 -- # IFS=.-:
00:06:42.899 20:03:28 json_config -- scripts/common.sh@337 -- # read -ra ver2
00:06:42.899 20:03:28 json_config -- scripts/common.sh@338 -- # local 'op=<'
00:06:42.899 20:03:28 json_config -- scripts/common.sh@340 -- # ver1_l=2
00:06:42.899 20:03:28 json_config -- scripts/common.sh@341 -- # ver2_l=1
00:06:42.899 20:03:28 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:42.899 20:03:28 json_config -- scripts/common.sh@344 -- # case "$op" in
00:06:42.899 20:03:28 json_config -- scripts/common.sh@345 -- # : 1
00:06:42.899 20:03:28 json_config -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:42.899 20:03:28 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:42.899 20:03:28 json_config -- scripts/common.sh@365 -- # decimal 1
00:06:42.899 20:03:28 json_config -- scripts/common.sh@353 -- # local d=1
00:06:42.899 20:03:28 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:42.899 20:03:28 json_config -- scripts/common.sh@355 -- # echo 1
00:06:42.899 20:03:28 json_config -- scripts/common.sh@365 -- # ver1[v]=1
00:06:42.899 20:03:28 json_config -- scripts/common.sh@366 -- # decimal 2
00:06:42.899 20:03:28 json_config -- scripts/common.sh@353 -- # local d=2
00:06:42.899 20:03:28 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:42.899 20:03:28 json_config -- scripts/common.sh@355 -- # echo 2
00:06:42.899 20:03:28 json_config -- scripts/common.sh@366 -- # ver2[v]=2
00:06:42.899 20:03:28 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:42.899 20:03:28 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:42.899 20:03:28 json_config -- scripts/common.sh@368 -- # return 0
00:06:42.899 20:03:28 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:42.899 20:03:28 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:06:42.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:42.899 --rc genhtml_branch_coverage=1
00:06:42.899 --rc genhtml_function_coverage=1
00:06:42.899 --rc genhtml_legend=1
00:06:42.899 --rc geninfo_all_blocks=1
00:06:42.899 --rc geninfo_unexecuted_blocks=1
00:06:42.899
00:06:42.899 '
00:06:42.899 20:03:28 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:06:42.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:42.899 --rc genhtml_branch_coverage=1
00:06:42.899 --rc genhtml_function_coverage=1
00:06:42.899 --rc genhtml_legend=1
00:06:42.899 --rc geninfo_all_blocks=1
00:06:42.899 --rc geninfo_unexecuted_blocks=1
00:06:42.899
00:06:42.899 '
00:06:42.899 20:03:28 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:06:42.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:42.899 --rc genhtml_branch_coverage=1
00:06:42.899 --rc genhtml_function_coverage=1
00:06:42.899 --rc genhtml_legend=1
00:06:42.899 --rc geninfo_all_blocks=1
00:06:42.899 --rc geninfo_unexecuted_blocks=1
00:06:42.899
00:06:42.899 '
00:06:42.899 20:03:28 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:06:42.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:42.899 --rc genhtml_branch_coverage=1
00:06:42.899 --rc genhtml_function_coverage=1
00:06:42.899 --rc genhtml_legend=1
00:06:42.899 --rc geninfo_all_blocks=1
00:06:42.899 --rc geninfo_unexecuted_blocks=1
00:06:42.899
00:06:42.899 '
00:06:42.899 20:03:28 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:06:42.899 20:03:28 json_config -- nvmf/common.sh@7 -- # uname -s
00:06:42.899 20:03:28 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:06:42.899 20:03:28 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:06:42.899 20:03:28 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:06:42.899 20:03:28 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:06:42.899 20:03:28 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:06:42.899 20:03:28 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:06:42.899 20:03:28 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:06:42.899 20:03:28 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:06:42.900 20:03:28 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:06:42.900 20:03:28 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:06:42.900 20:03:28 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2b170c76-5239-45ab-b67f-1abff7414b97
00:06:42.900 20:03:28 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=2b170c76-5239-45ab-b67f-1abff7414b97
00:06:42.900 20:03:28 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:06:42.900 20:03:28 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:06:42.900 20:03:28 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:06:42.900 20:03:28 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:06:42.900 20:03:28 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:06:42.900 20:03:28 json_config -- scripts/common.sh@15 -- # shopt -s extglob
00:06:42.900 20:03:28 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:06:42.900 20:03:28 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:06:42.900 20:03:28 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:06:42.900 20:03:28 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:42.900 20:03:28 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:42.900 20:03:28 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:42.900 20:03:28 json_config -- paths/export.sh@5 -- # export PATH
00:06:42.900 20:03:28 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:42.900 20:03:28 json_config -- nvmf/common.sh@51 -- # : 0
00:06:42.900 20:03:28 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:06:42.900 20:03:28 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:06:42.900 20:03:28 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:06:42.900 20:03:28 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:06:42.900 20:03:28 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:06:42.900 20:03:28 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:06:42.900 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 20:03:28 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:06:42.900 20:03:28 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:06:42.900 20:03:28 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0
00:06:42.900 20:03:28 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh
00:06:42.900 WARNING: No tests are enabled so not running JSON configuration tests 20:03:28 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]]
00:06:42.900 20:03:28 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]]
00:06:42.900 20:03:28 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]]
00:06:42.900 20:03:28 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 ))
00:06:42.900 20:03:28 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests'
00:06:42.900 20:03:28 json_config -- json_config/json_config.sh@28 -- # exit 0
00:06:42.900 ************************************
00:06:42.900 END TEST json_config
00:06:42.900 ************************************
00:06:42.900
00:06:42.900 real 0m0.216s
00:06:42.900 user 0m0.142s
00:06:42.900 sys 0m0.074s
00:06:42.900 20:03:28 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:42.900 20:03:28 json_config -- common/autotest_common.sh@10 -- # set +x
00:06:43.159 20:03:28 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh
00:06:43.159 20:03:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:43.159 20:03:28 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:43.159 20:03:28 -- common/autotest_common.sh@10 -- # set +x
00:06:43.159 ************************************
00:06:43.159 START TEST json_config_extra_key
00:06:43.159 ************************************
00:06:43.159 20:03:28 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh
00:06:43.159 20:03:28 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:06:43.159 20:03:28 json_config_extra_key --
common/autotest_common.sh@1691 -- # lcov --version 00:06:43.159 20:03:28 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:43.159 20:03:28 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:43.159 20:03:28 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:43.159 20:03:28 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:43.159 20:03:28 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:43.159 20:03:28 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:43.159 20:03:28 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:43.159 20:03:28 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:43.159 20:03:28 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:43.159 20:03:28 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:43.159 20:03:28 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:43.159 20:03:28 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:43.159 20:03:28 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:43.159 20:03:28 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:43.159 20:03:28 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:43.159 20:03:28 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:43.159 20:03:28 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:43.159 20:03:28 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:43.159 20:03:28 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:43.159 20:03:28 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:43.159 20:03:28 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:43.159 20:03:28 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:43.159 20:03:28 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:43.159 20:03:28 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:43.159 20:03:28 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:43.159 20:03:28 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:43.159 20:03:28 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:43.159 20:03:28 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:43.160 20:03:28 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:43.160 20:03:28 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:43.160 20:03:28 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:43.160 20:03:28 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:43.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.160 --rc genhtml_branch_coverage=1 00:06:43.160 --rc genhtml_function_coverage=1 00:06:43.160 --rc genhtml_legend=1 00:06:43.160 --rc geninfo_all_blocks=1 00:06:43.160 --rc geninfo_unexecuted_blocks=1 00:06:43.160 00:06:43.160 ' 00:06:43.160 20:03:28 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:43.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.160 --rc genhtml_branch_coverage=1 00:06:43.160 --rc genhtml_function_coverage=1 00:06:43.160 --rc 
genhtml_legend=1 00:06:43.160 --rc geninfo_all_blocks=1 00:06:43.160 --rc geninfo_unexecuted_blocks=1 00:06:43.160 00:06:43.160 ' 00:06:43.160 20:03:28 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:43.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.160 --rc genhtml_branch_coverage=1 00:06:43.160 --rc genhtml_function_coverage=1 00:06:43.160 --rc genhtml_legend=1 00:06:43.160 --rc geninfo_all_blocks=1 00:06:43.160 --rc geninfo_unexecuted_blocks=1 00:06:43.160 00:06:43.160 ' 00:06:43.160 20:03:28 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:43.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.160 --rc genhtml_branch_coverage=1 00:06:43.160 --rc genhtml_function_coverage=1 00:06:43.160 --rc genhtml_legend=1 00:06:43.160 --rc geninfo_all_blocks=1 00:06:43.160 --rc geninfo_unexecuted_blocks=1 00:06:43.160 00:06:43.160 ' 00:06:43.160 20:03:28 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:43.160 20:03:28 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:43.160 20:03:28 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:43.160 20:03:28 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:43.160 20:03:28 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:43.160 20:03:28 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:43.160 20:03:28 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:43.160 20:03:28 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:43.160 20:03:28 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:43.160 20:03:28 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:43.160 20:03:28 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:43.160 20:03:28 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:43.160 20:03:28 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2b170c76-5239-45ab-b67f-1abff7414b97 00:06:43.160 20:03:28 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=2b170c76-5239-45ab-b67f-1abff7414b97 00:06:43.160 20:03:28 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:43.160 20:03:28 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:43.160 20:03:28 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:43.160 20:03:28 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:43.160 20:03:28 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:43.160 20:03:28 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:43.160 20:03:28 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:43.160 20:03:28 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:43.160 20:03:28 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:43.160 20:03:28 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.160 20:03:28 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.160 20:03:28 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.160 20:03:28 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:43.160 20:03:28 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.160 20:03:28 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:43.160 20:03:28 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:43.160 20:03:28 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:43.160 20:03:28 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:43.160 20:03:28 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:43.160 20:03:28 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:06:43.160 20:03:28 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:43.160 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:43.160 20:03:28 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:43.160 20:03:28 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:43.160 20:03:28 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:43.160 20:03:28 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:43.160 20:03:28 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:43.160 20:03:28 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:43.160 20:03:28 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:43.160 20:03:28 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:43.160 20:03:28 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:43.160 20:03:28 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:43.160 20:03:28 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:43.160 INFO: launching applications... 00:06:43.160 20:03:28 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:43.160 20:03:28 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:43.160 20:03:28 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
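[Editor's note] The trace above records a recurring shell error from nvmf/common.sh line 33: the test `'[' '' -eq 1 ']'` fails with "integer expression expected", because `-eq` requires both operands to be integers and the variable being tested expands to an empty string. A minimal standalone sketch of the failure and of the usual `${var:-0}` guard (the variable name here is a hypothetical stand-in, not the one used in nvmf/common.sh):

```shell
#!/usr/bin/env bash
# Hypothetical stand-in for whichever flag variable is empty in the logged run.
TEST_FLAG=""

# Unguarded: with an empty operand, [ '' -eq 1 ] is not an integer
# comparison, so bash's [ builtin reports "integer expression expected"
# and exits with status 2.
[ "$TEST_FLAG" -eq 1 ] 2>/dev/null
unguarded_status=$?

# Guarded: ${var:-0} substitutes 0 when the variable is empty or unset,
# so [ always compares two integers and no error is printed.
if [ "${TEST_FLAG:-0}" -eq 1 ]; then
    feature=enabled
else
    feature=disabled
fi
echo "$feature"
```

The logged run is otherwise unaffected by the message because the script does not run under `set -e` at that point; the failed test simply takes the else branch.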
00:06:43.160 20:03:28 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:43.160 20:03:28 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:43.160 20:03:28 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:43.160 20:03:28 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:43.160 20:03:28 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:43.160 20:03:28 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:43.160 20:03:28 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:43.160 20:03:28 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:43.160 20:03:28 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57543 00:06:43.160 20:03:28 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:43.160 Waiting for target to run... 00:06:43.160 20:03:28 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57543 /var/tmp/spdk_tgt.sock 00:06:43.160 20:03:28 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:43.160 20:03:28 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 57543 ']' 00:06:43.160 20:03:28 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:43.160 20:03:28 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:43.160 20:03:28 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:06:43.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:43.160 20:03:28 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:43.160 20:03:28 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:43.419 [2024-10-17 20:03:28.898276] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 00:06:43.419 [2024-10-17 20:03:28.898745] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57543 ] 00:06:43.985 [2024-10-17 20:03:29.351121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.985 [2024-10-17 20:03:29.473359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.552 00:06:44.552 INFO: shutting down applications... 00:06:44.552 20:03:30 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:44.552 20:03:30 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:06:44.552 20:03:30 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:44.553 20:03:30 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
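[Editor's note] The shutdown sequence traced here (json_config/common.sh) sends the target a SIGINT and then polls with `kill -0` every half second, up to 30 attempts, before declaring shutdown done. A self-contained sketch of that polling pattern (the background `sleep` stands in for the spdk_tgt process; plain `kill` is used because non-interactive shells start background jobs with SIGINT ignored):

```shell
#!/usr/bin/env bash
# Stand-in for the target process under test.
sleep 30 &
target_pid=$!

# The harness sends SIGINT to spdk_tgt; SIGTERM is used in this sketch
# since background jobs in a script ignore SIGINT.
kill "$target_pid"

# Poll up to 30 times, half a second apart. kill -0 delivers no signal;
# it only tests whether the PID still exists.
for (( i = 0; i < 30; i++ )); do
    kill -0 "$target_pid" 2>/dev/null || break
    sleep 0.5
done

if kill -0 "$target_pid" 2>/dev/null; then
    result='still running'
else
    result='shutdown done'
fi
echo "$result"
```

The `kill -0` probe is why the trace shows repeated `kill -0 57543` lines interleaved with `sleep 0.5` until the target exits.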
00:06:44.553 20:03:30 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:44.553 20:03:30 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:44.553 20:03:30 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:44.553 20:03:30 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57543 ]] 00:06:44.553 20:03:30 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57543 00:06:44.553 20:03:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:44.553 20:03:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:44.553 20:03:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57543 00:06:44.553 20:03:30 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:45.119 20:03:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:45.119 20:03:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:45.119 20:03:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57543 00:06:45.119 20:03:30 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:45.685 20:03:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:45.685 20:03:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:45.685 20:03:31 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57543 00:06:45.685 20:03:31 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:46.252 20:03:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:46.252 20:03:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:46.252 20:03:31 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57543 00:06:46.252 20:03:31 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:46.817 20:03:32 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:06:46.817 20:03:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:46.817 20:03:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57543 00:06:46.817 20:03:32 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:47.075 20:03:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:47.075 20:03:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:47.075 20:03:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57543 00:06:47.075 20:03:32 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:47.642 20:03:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:47.642 20:03:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:47.642 20:03:33 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57543 00:06:47.642 20:03:33 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:47.642 20:03:33 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:47.642 20:03:33 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:47.642 SPDK target shutdown done 00:06:47.642 20:03:33 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:47.642 Success 00:06:47.642 20:03:33 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:47.642 ************************************ 00:06:47.642 END TEST json_config_extra_key 00:06:47.642 ************************************ 00:06:47.642 00:06:47.642 real 0m4.612s 00:06:47.642 user 0m3.959s 00:06:47.642 sys 0m0.665s 00:06:47.642 20:03:33 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:47.642 20:03:33 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:47.642 20:03:33 -- spdk/autotest.sh@161 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:47.642 20:03:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:47.642 20:03:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:47.642 20:03:33 -- common/autotest_common.sh@10 -- # set +x 00:06:47.642 ************************************ 00:06:47.642 START TEST alias_rpc 00:06:47.642 ************************************ 00:06:47.642 20:03:33 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:47.901 * Looking for test storage... 00:06:47.901 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:47.901 20:03:33 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:47.901 20:03:33 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:06:47.901 20:03:33 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:47.901 20:03:33 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:47.901 20:03:33 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:47.901 20:03:33 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:47.901 20:03:33 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:47.901 20:03:33 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:47.901 20:03:33 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:47.901 20:03:33 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:47.901 20:03:33 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:47.901 20:03:33 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:47.901 20:03:33 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:47.901 20:03:33 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:47.901 20:03:33 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:47.901 20:03:33 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:47.901 20:03:33 alias_rpc -- 
scripts/common.sh@345 -- # : 1 00:06:47.901 20:03:33 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:47.901 20:03:33 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:47.901 20:03:33 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:47.901 20:03:33 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:47.901 20:03:33 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:47.901 20:03:33 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:47.901 20:03:33 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:47.901 20:03:33 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:47.901 20:03:33 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:47.901 20:03:33 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:47.901 20:03:33 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:47.901 20:03:33 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:47.901 20:03:33 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:47.901 20:03:33 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:47.901 20:03:33 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:47.901 20:03:33 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:47.901 20:03:33 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:47.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.901 --rc genhtml_branch_coverage=1 00:06:47.901 --rc genhtml_function_coverage=1 00:06:47.901 --rc genhtml_legend=1 00:06:47.901 --rc geninfo_all_blocks=1 00:06:47.901 --rc geninfo_unexecuted_blocks=1 00:06:47.901 00:06:47.901 ' 00:06:47.901 20:03:33 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:47.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.901 --rc genhtml_branch_coverage=1 00:06:47.901 --rc genhtml_function_coverage=1 00:06:47.901 --rc 
genhtml_legend=1 00:06:47.901 --rc geninfo_all_blocks=1 00:06:47.901 --rc geninfo_unexecuted_blocks=1 00:06:47.901 00:06:47.901 ' 00:06:47.901 20:03:33 alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:47.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.901 --rc genhtml_branch_coverage=1 00:06:47.901 --rc genhtml_function_coverage=1 00:06:47.901 --rc genhtml_legend=1 00:06:47.901 --rc geninfo_all_blocks=1 00:06:47.901 --rc geninfo_unexecuted_blocks=1 00:06:47.901 00:06:47.901 ' 00:06:47.901 20:03:33 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:47.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.901 --rc genhtml_branch_coverage=1 00:06:47.901 --rc genhtml_function_coverage=1 00:06:47.901 --rc genhtml_legend=1 00:06:47.901 --rc geninfo_all_blocks=1 00:06:47.901 --rc geninfo_unexecuted_blocks=1 00:06:47.901 00:06:47.901 ' 00:06:47.901 20:03:33 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:47.901 20:03:33 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57649 00:06:47.901 20:03:33 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57649 00:06:47.901 20:03:33 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:47.901 20:03:33 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 57649 ']' 00:06:47.901 20:03:33 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.901 20:03:33 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:47.901 20:03:33 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
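[Editor's note] The `waitforlisten` step traced here blocks until the freshly launched spdk_tgt is listening on its UNIX domain socket (/var/tmp/spdk.sock above). A simplified sketch of the wait-with-retry-budget idea — a plain temp file stands in for the socket, and the real helper additionally verifies the RPC endpoint responds rather than just checking the path:

```shell
#!/usr/bin/env bash
# Simulate an app that takes a moment to create its listen socket;
# a plain file stands in for /var/tmp/spdk.sock in this sketch.
sock="$(mktemp -u)"
( sleep 1; : > "$sock" ) &

# Poll until the path appears, with a bounded retry budget
# (100 x 0.1 s = 10 s here; waitforlisten's budget differs).
for (( retries = 100; retries > 0; retries-- )); do
    [ -e "$sock" ] && break
    sleep 0.1
done

if [ -e "$sock" ]; then listen_state=up; else listen_state=timeout; fi
echo "$listen_state"
rm -f "$sock"
```

For a real socket the existence test would be `[ -S "$sock" ]`, and timing out would abort the test with a nonzero status instead of continuing.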
00:06:47.901 20:03:33 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:47.901 20:03:33 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.160 [2024-10-17 20:03:33.616066] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 00:06:48.160 [2024-10-17 20:03:33.616519] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57649 ] 00:06:48.160 [2024-10-17 20:03:33.794079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.419 [2024-10-17 20:03:33.929692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.355 20:03:34 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:49.355 20:03:34 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:49.355 20:03:34 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:49.613 20:03:35 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57649 00:06:49.613 20:03:35 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 57649 ']' 00:06:49.613 20:03:35 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 57649 00:06:49.613 20:03:35 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:06:49.613 20:03:35 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:49.613 20:03:35 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57649 00:06:49.613 killing process with pid 57649 00:06:49.613 20:03:35 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:49.613 20:03:35 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:49.613 20:03:35 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57649' 00:06:49.613 20:03:35 alias_rpc -- 
common/autotest_common.sh@969 -- # kill 57649 00:06:49.613 20:03:35 alias_rpc -- common/autotest_common.sh@974 -- # wait 57649 00:06:52.143 ************************************ 00:06:52.143 END TEST alias_rpc 00:06:52.143 ************************************ 00:06:52.143 00:06:52.143 real 0m4.096s 00:06:52.143 user 0m4.146s 00:06:52.143 sys 0m0.670s 00:06:52.143 20:03:37 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:52.143 20:03:37 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.143 20:03:37 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:52.143 20:03:37 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:52.143 20:03:37 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:52.143 20:03:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:52.143 20:03:37 -- common/autotest_common.sh@10 -- # set +x 00:06:52.143 ************************************ 00:06:52.143 START TEST spdkcli_tcp 00:06:52.143 ************************************ 00:06:52.143 20:03:37 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:52.143 * Looking for test storage... 
00:06:52.143 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:52.144 20:03:37 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:52.144 20:03:37 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:06:52.144 20:03:37 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:52.144 20:03:37 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:52.144 20:03:37 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:52.144 20:03:37 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:52.144 20:03:37 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:52.144 20:03:37 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:52.144 20:03:37 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:52.144 20:03:37 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:52.144 20:03:37 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:52.144 20:03:37 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:52.144 20:03:37 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:52.144 20:03:37 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:52.144 20:03:37 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:52.144 20:03:37 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:52.144 20:03:37 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:52.144 20:03:37 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:52.144 20:03:37 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:52.144 20:03:37 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:52.144 20:03:37 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:52.144 20:03:37 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:52.144 20:03:37 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:52.144 20:03:37 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:52.144 20:03:37 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:52.144 20:03:37 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:52.144 20:03:37 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:52.144 20:03:37 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:52.144 20:03:37 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:52.144 20:03:37 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:52.144 20:03:37 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:52.144 20:03:37 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:52.144 20:03:37 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:52.144 20:03:37 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:52.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.144 --rc genhtml_branch_coverage=1 00:06:52.144 --rc genhtml_function_coverage=1 00:06:52.144 --rc genhtml_legend=1 00:06:52.144 --rc geninfo_all_blocks=1 00:06:52.144 --rc geninfo_unexecuted_blocks=1 00:06:52.144 00:06:52.144 ' 00:06:52.144 20:03:37 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:52.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.144 --rc genhtml_branch_coverage=1 00:06:52.144 --rc genhtml_function_coverage=1 00:06:52.144 --rc genhtml_legend=1 00:06:52.144 --rc geninfo_all_blocks=1 00:06:52.144 --rc geninfo_unexecuted_blocks=1 00:06:52.144 00:06:52.144 ' 00:06:52.144 20:03:37 spdkcli_tcp -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:52.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.144 --rc genhtml_branch_coverage=1 00:06:52.144 --rc genhtml_function_coverage=1 00:06:52.144 --rc genhtml_legend=1 00:06:52.144 --rc geninfo_all_blocks=1 00:06:52.144 --rc geninfo_unexecuted_blocks=1 00:06:52.144 00:06:52.144 ' 00:06:52.144 20:03:37 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:52.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.144 --rc genhtml_branch_coverage=1 00:06:52.144 --rc genhtml_function_coverage=1 00:06:52.144 --rc genhtml_legend=1 00:06:52.144 --rc geninfo_all_blocks=1 00:06:52.144 --rc geninfo_unexecuted_blocks=1 00:06:52.144 00:06:52.144 ' 00:06:52.144 20:03:37 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:52.144 20:03:37 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:52.144 20:03:37 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:52.144 20:03:37 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:52.144 20:03:37 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:52.144 20:03:37 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:52.144 20:03:37 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:52.144 20:03:37 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:52.144 20:03:37 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:52.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:52.144 20:03:37 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57756 00:06:52.144 20:03:37 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57756 00:06:52.144 20:03:37 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:52.144 20:03:37 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 57756 ']' 00:06:52.144 20:03:37 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.144 20:03:37 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:52.144 20:03:37 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.144 20:03:37 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:52.144 20:03:37 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:52.144 [2024-10-17 20:03:37.758787] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 
00:06:52.144 [2024-10-17 20:03:37.759014] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57756 ] 00:06:52.403 [2024-10-17 20:03:37.940882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:52.661 [2024-10-17 20:03:38.098351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.661 [2024-10-17 20:03:38.098359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.597 20:03:38 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:53.597 20:03:38 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:06:53.597 20:03:38 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:53.597 20:03:38 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57784 00:06:53.597 20:03:38 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:53.856 [ 00:06:53.856 "bdev_malloc_delete", 00:06:53.856 "bdev_malloc_create", 00:06:53.856 "bdev_null_resize", 00:06:53.856 "bdev_null_delete", 00:06:53.856 "bdev_null_create", 00:06:53.856 "bdev_nvme_cuse_unregister", 00:06:53.856 "bdev_nvme_cuse_register", 00:06:53.856 "bdev_opal_new_user", 00:06:53.856 "bdev_opal_set_lock_state", 00:06:53.856 "bdev_opal_delete", 00:06:53.856 "bdev_opal_get_info", 00:06:53.856 "bdev_opal_create", 00:06:53.856 "bdev_nvme_opal_revert", 00:06:53.856 "bdev_nvme_opal_init", 00:06:53.856 "bdev_nvme_send_cmd", 00:06:53.856 "bdev_nvme_set_keys", 00:06:53.856 "bdev_nvme_get_path_iostat", 00:06:53.856 "bdev_nvme_get_mdns_discovery_info", 00:06:53.856 "bdev_nvme_stop_mdns_discovery", 00:06:53.856 "bdev_nvme_start_mdns_discovery", 00:06:53.856 "bdev_nvme_set_multipath_policy", 00:06:53.856 
"bdev_nvme_set_preferred_path", 00:06:53.856 "bdev_nvme_get_io_paths", 00:06:53.856 "bdev_nvme_remove_error_injection", 00:06:53.856 "bdev_nvme_add_error_injection", 00:06:53.856 "bdev_nvme_get_discovery_info", 00:06:53.856 "bdev_nvme_stop_discovery", 00:06:53.856 "bdev_nvme_start_discovery", 00:06:53.856 "bdev_nvme_get_controller_health_info", 00:06:53.856 "bdev_nvme_disable_controller", 00:06:53.856 "bdev_nvme_enable_controller", 00:06:53.856 "bdev_nvme_reset_controller", 00:06:53.856 "bdev_nvme_get_transport_statistics", 00:06:53.856 "bdev_nvme_apply_firmware", 00:06:53.856 "bdev_nvme_detach_controller", 00:06:53.856 "bdev_nvme_get_controllers", 00:06:53.856 "bdev_nvme_attach_controller", 00:06:53.856 "bdev_nvme_set_hotplug", 00:06:53.856 "bdev_nvme_set_options", 00:06:53.856 "bdev_passthru_delete", 00:06:53.856 "bdev_passthru_create", 00:06:53.856 "bdev_lvol_set_parent_bdev", 00:06:53.856 "bdev_lvol_set_parent", 00:06:53.856 "bdev_lvol_check_shallow_copy", 00:06:53.856 "bdev_lvol_start_shallow_copy", 00:06:53.856 "bdev_lvol_grow_lvstore", 00:06:53.856 "bdev_lvol_get_lvols", 00:06:53.856 "bdev_lvol_get_lvstores", 00:06:53.856 "bdev_lvol_delete", 00:06:53.856 "bdev_lvol_set_read_only", 00:06:53.856 "bdev_lvol_resize", 00:06:53.856 "bdev_lvol_decouple_parent", 00:06:53.856 "bdev_lvol_inflate", 00:06:53.856 "bdev_lvol_rename", 00:06:53.856 "bdev_lvol_clone_bdev", 00:06:53.856 "bdev_lvol_clone", 00:06:53.856 "bdev_lvol_snapshot", 00:06:53.856 "bdev_lvol_create", 00:06:53.856 "bdev_lvol_delete_lvstore", 00:06:53.856 "bdev_lvol_rename_lvstore", 00:06:53.856 "bdev_lvol_create_lvstore", 00:06:53.856 "bdev_raid_set_options", 00:06:53.856 "bdev_raid_remove_base_bdev", 00:06:53.856 "bdev_raid_add_base_bdev", 00:06:53.856 "bdev_raid_delete", 00:06:53.856 "bdev_raid_create", 00:06:53.856 "bdev_raid_get_bdevs", 00:06:53.856 "bdev_error_inject_error", 00:06:53.856 "bdev_error_delete", 00:06:53.856 "bdev_error_create", 00:06:53.856 "bdev_split_delete", 00:06:53.856 
"bdev_split_create", 00:06:53.856 "bdev_delay_delete", 00:06:53.856 "bdev_delay_create", 00:06:53.856 "bdev_delay_update_latency", 00:06:53.856 "bdev_zone_block_delete", 00:06:53.856 "bdev_zone_block_create", 00:06:53.856 "blobfs_create", 00:06:53.856 "blobfs_detect", 00:06:53.857 "blobfs_set_cache_size", 00:06:53.857 "bdev_aio_delete", 00:06:53.857 "bdev_aio_rescan", 00:06:53.857 "bdev_aio_create", 00:06:53.857 "bdev_ftl_set_property", 00:06:53.857 "bdev_ftl_get_properties", 00:06:53.857 "bdev_ftl_get_stats", 00:06:53.857 "bdev_ftl_unmap", 00:06:53.857 "bdev_ftl_unload", 00:06:53.857 "bdev_ftl_delete", 00:06:53.857 "bdev_ftl_load", 00:06:53.857 "bdev_ftl_create", 00:06:53.857 "bdev_virtio_attach_controller", 00:06:53.857 "bdev_virtio_scsi_get_devices", 00:06:53.857 "bdev_virtio_detach_controller", 00:06:53.857 "bdev_virtio_blk_set_hotplug", 00:06:53.857 "bdev_iscsi_delete", 00:06:53.857 "bdev_iscsi_create", 00:06:53.857 "bdev_iscsi_set_options", 00:06:53.857 "accel_error_inject_error", 00:06:53.857 "ioat_scan_accel_module", 00:06:53.857 "dsa_scan_accel_module", 00:06:53.857 "iaa_scan_accel_module", 00:06:53.857 "keyring_file_remove_key", 00:06:53.857 "keyring_file_add_key", 00:06:53.857 "keyring_linux_set_options", 00:06:53.857 "fsdev_aio_delete", 00:06:53.857 "fsdev_aio_create", 00:06:53.857 "iscsi_get_histogram", 00:06:53.857 "iscsi_enable_histogram", 00:06:53.857 "iscsi_set_options", 00:06:53.857 "iscsi_get_auth_groups", 00:06:53.857 "iscsi_auth_group_remove_secret", 00:06:53.857 "iscsi_auth_group_add_secret", 00:06:53.857 "iscsi_delete_auth_group", 00:06:53.857 "iscsi_create_auth_group", 00:06:53.857 "iscsi_set_discovery_auth", 00:06:53.857 "iscsi_get_options", 00:06:53.857 "iscsi_target_node_request_logout", 00:06:53.857 "iscsi_target_node_set_redirect", 00:06:53.857 "iscsi_target_node_set_auth", 00:06:53.857 "iscsi_target_node_add_lun", 00:06:53.857 "iscsi_get_stats", 00:06:53.857 "iscsi_get_connections", 00:06:53.857 "iscsi_portal_group_set_auth", 
00:06:53.857 "iscsi_start_portal_group", 00:06:53.857 "iscsi_delete_portal_group", 00:06:53.857 "iscsi_create_portal_group", 00:06:53.857 "iscsi_get_portal_groups", 00:06:53.857 "iscsi_delete_target_node", 00:06:53.857 "iscsi_target_node_remove_pg_ig_maps", 00:06:53.857 "iscsi_target_node_add_pg_ig_maps", 00:06:53.857 "iscsi_create_target_node", 00:06:53.857 "iscsi_get_target_nodes", 00:06:53.857 "iscsi_delete_initiator_group", 00:06:53.857 "iscsi_initiator_group_remove_initiators", 00:06:53.857 "iscsi_initiator_group_add_initiators", 00:06:53.857 "iscsi_create_initiator_group", 00:06:53.857 "iscsi_get_initiator_groups", 00:06:53.857 "nvmf_set_crdt", 00:06:53.857 "nvmf_set_config", 00:06:53.857 "nvmf_set_max_subsystems", 00:06:53.857 "nvmf_stop_mdns_prr", 00:06:53.857 "nvmf_publish_mdns_prr", 00:06:53.857 "nvmf_subsystem_get_listeners", 00:06:53.857 "nvmf_subsystem_get_qpairs", 00:06:53.857 "nvmf_subsystem_get_controllers", 00:06:53.857 "nvmf_get_stats", 00:06:53.857 "nvmf_get_transports", 00:06:53.857 "nvmf_create_transport", 00:06:53.857 "nvmf_get_targets", 00:06:53.857 "nvmf_delete_target", 00:06:53.857 "nvmf_create_target", 00:06:53.857 "nvmf_subsystem_allow_any_host", 00:06:53.857 "nvmf_subsystem_set_keys", 00:06:53.857 "nvmf_subsystem_remove_host", 00:06:53.857 "nvmf_subsystem_add_host", 00:06:53.857 "nvmf_ns_remove_host", 00:06:53.857 "nvmf_ns_add_host", 00:06:53.857 "nvmf_subsystem_remove_ns", 00:06:53.857 "nvmf_subsystem_set_ns_ana_group", 00:06:53.857 "nvmf_subsystem_add_ns", 00:06:53.857 "nvmf_subsystem_listener_set_ana_state", 00:06:53.857 "nvmf_discovery_get_referrals", 00:06:53.857 "nvmf_discovery_remove_referral", 00:06:53.857 "nvmf_discovery_add_referral", 00:06:53.857 "nvmf_subsystem_remove_listener", 00:06:53.857 "nvmf_subsystem_add_listener", 00:06:53.857 "nvmf_delete_subsystem", 00:06:53.857 "nvmf_create_subsystem", 00:06:53.857 "nvmf_get_subsystems", 00:06:53.857 "env_dpdk_get_mem_stats", 00:06:53.857 "nbd_get_disks", 00:06:53.857 
"nbd_stop_disk", 00:06:53.857 "nbd_start_disk", 00:06:53.857 "ublk_recover_disk", 00:06:53.857 "ublk_get_disks", 00:06:53.857 "ublk_stop_disk", 00:06:53.857 "ublk_start_disk", 00:06:53.857 "ublk_destroy_target", 00:06:53.857 "ublk_create_target", 00:06:53.857 "virtio_blk_create_transport", 00:06:53.857 "virtio_blk_get_transports", 00:06:53.857 "vhost_controller_set_coalescing", 00:06:53.857 "vhost_get_controllers", 00:06:53.857 "vhost_delete_controller", 00:06:53.857 "vhost_create_blk_controller", 00:06:53.857 "vhost_scsi_controller_remove_target", 00:06:53.857 "vhost_scsi_controller_add_target", 00:06:53.857 "vhost_start_scsi_controller", 00:06:53.857 "vhost_create_scsi_controller", 00:06:53.857 "thread_set_cpumask", 00:06:53.857 "scheduler_set_options", 00:06:53.857 "framework_get_governor", 00:06:53.857 "framework_get_scheduler", 00:06:53.857 "framework_set_scheduler", 00:06:53.857 "framework_get_reactors", 00:06:53.857 "thread_get_io_channels", 00:06:53.857 "thread_get_pollers", 00:06:53.857 "thread_get_stats", 00:06:53.857 "framework_monitor_context_switch", 00:06:53.857 "spdk_kill_instance", 00:06:53.857 "log_enable_timestamps", 00:06:53.857 "log_get_flags", 00:06:53.857 "log_clear_flag", 00:06:53.857 "log_set_flag", 00:06:53.857 "log_get_level", 00:06:53.857 "log_set_level", 00:06:53.857 "log_get_print_level", 00:06:53.857 "log_set_print_level", 00:06:53.857 "framework_enable_cpumask_locks", 00:06:53.857 "framework_disable_cpumask_locks", 00:06:53.857 "framework_wait_init", 00:06:53.857 "framework_start_init", 00:06:53.857 "scsi_get_devices", 00:06:53.857 "bdev_get_histogram", 00:06:53.857 "bdev_enable_histogram", 00:06:53.857 "bdev_set_qos_limit", 00:06:53.857 "bdev_set_qd_sampling_period", 00:06:53.857 "bdev_get_bdevs", 00:06:53.857 "bdev_reset_iostat", 00:06:53.857 "bdev_get_iostat", 00:06:53.857 "bdev_examine", 00:06:53.857 "bdev_wait_for_examine", 00:06:53.857 "bdev_set_options", 00:06:53.857 "accel_get_stats", 00:06:53.857 "accel_set_options", 
00:06:53.857 "accel_set_driver", 00:06:53.857 "accel_crypto_key_destroy", 00:06:53.857 "accel_crypto_keys_get", 00:06:53.857 "accel_crypto_key_create", 00:06:53.857 "accel_assign_opc", 00:06:53.857 "accel_get_module_info", 00:06:53.857 "accel_get_opc_assignments", 00:06:53.857 "vmd_rescan", 00:06:53.857 "vmd_remove_device", 00:06:53.857 "vmd_enable", 00:06:53.857 "sock_get_default_impl", 00:06:53.857 "sock_set_default_impl", 00:06:53.857 "sock_impl_set_options", 00:06:53.857 "sock_impl_get_options", 00:06:53.857 "iobuf_get_stats", 00:06:53.857 "iobuf_set_options", 00:06:53.857 "keyring_get_keys", 00:06:53.857 "framework_get_pci_devices", 00:06:53.857 "framework_get_config", 00:06:53.857 "framework_get_subsystems", 00:06:53.857 "fsdev_set_opts", 00:06:53.857 "fsdev_get_opts", 00:06:53.857 "trace_get_info", 00:06:53.857 "trace_get_tpoint_group_mask", 00:06:53.857 "trace_disable_tpoint_group", 00:06:53.857 "trace_enable_tpoint_group", 00:06:53.857 "trace_clear_tpoint_mask", 00:06:53.857 "trace_set_tpoint_mask", 00:06:53.857 "notify_get_notifications", 00:06:53.857 "notify_get_types", 00:06:53.857 "spdk_get_version", 00:06:53.857 "rpc_get_methods" 00:06:53.857 ] 00:06:53.857 20:03:39 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:53.857 20:03:39 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:53.857 20:03:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:53.857 20:03:39 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:53.857 20:03:39 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57756 00:06:53.857 20:03:39 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 57756 ']' 00:06:53.857 20:03:39 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 57756 00:06:53.857 20:03:39 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:06:53.857 20:03:39 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:53.857 20:03:39 spdkcli_tcp -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57756 00:06:53.857 20:03:39 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:53.857 20:03:39 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:53.858 20:03:39 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57756' 00:06:53.858 killing process with pid 57756 00:06:53.858 20:03:39 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 57756 00:06:53.858 20:03:39 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 57756 00:06:56.416 ************************************ 00:06:56.416 END TEST spdkcli_tcp 00:06:56.416 ************************************ 00:06:56.416 00:06:56.416 real 0m4.179s 00:06:56.416 user 0m7.509s 00:06:56.416 sys 0m0.719s 00:06:56.416 20:03:41 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:56.416 20:03:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:56.416 20:03:41 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:56.416 20:03:41 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:56.416 20:03:41 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:56.416 20:03:41 -- common/autotest_common.sh@10 -- # set +x 00:06:56.416 ************************************ 00:06:56.416 START TEST dpdk_mem_utility 00:06:56.416 ************************************ 00:06:56.416 20:03:41 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:56.416 * Looking for test storage... 
00:06:56.416 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:56.416 20:03:41 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:56.416 20:03:41 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:06:56.416 20:03:41 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:56.416 20:03:41 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:56.416 20:03:41 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:56.416 20:03:41 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:56.416 20:03:41 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:56.416 20:03:41 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:56.416 20:03:41 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:56.416 20:03:41 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:56.416 20:03:41 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:56.416 20:03:41 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:56.416 20:03:41 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:56.416 20:03:41 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:56.416 20:03:41 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:56.416 20:03:41 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:56.416 20:03:41 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:56.416 20:03:41 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:56.416 20:03:41 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:56.416 20:03:41 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:56.416 20:03:41 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:56.416 20:03:41 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:56.416 20:03:41 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:56.416 20:03:41 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:56.416 20:03:41 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:56.416 20:03:41 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:56.416 20:03:41 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:56.416 20:03:41 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:56.416 20:03:41 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:56.416 20:03:41 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:56.416 20:03:41 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:56.416 20:03:41 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:56.416 20:03:41 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:56.416 20:03:41 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:56.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.416 --rc genhtml_branch_coverage=1 00:06:56.416 --rc genhtml_function_coverage=1 00:06:56.416 --rc genhtml_legend=1 00:06:56.416 --rc geninfo_all_blocks=1 00:06:56.416 --rc geninfo_unexecuted_blocks=1 00:06:56.416 00:06:56.416 ' 00:06:56.416 20:03:41 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:56.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.416 --rc genhtml_branch_coverage=1 00:06:56.416 --rc genhtml_function_coverage=1 00:06:56.416 --rc genhtml_legend=1 00:06:56.416 --rc geninfo_all_blocks=1 00:06:56.416 --rc 
geninfo_unexecuted_blocks=1 00:06:56.416 00:06:56.416 ' 00:06:56.416 20:03:41 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:56.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.416 --rc genhtml_branch_coverage=1 00:06:56.416 --rc genhtml_function_coverage=1 00:06:56.416 --rc genhtml_legend=1 00:06:56.416 --rc geninfo_all_blocks=1 00:06:56.416 --rc geninfo_unexecuted_blocks=1 00:06:56.416 00:06:56.416 ' 00:06:56.416 20:03:41 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:56.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.416 --rc genhtml_branch_coverage=1 00:06:56.416 --rc genhtml_function_coverage=1 00:06:56.416 --rc genhtml_legend=1 00:06:56.416 --rc geninfo_all_blocks=1 00:06:56.416 --rc geninfo_unexecuted_blocks=1 00:06:56.416 00:06:56.416 ' 00:06:56.416 20:03:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:56.416 20:03:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57884 00:06:56.416 20:03:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:56.416 20:03:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57884 00:06:56.416 20:03:41 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 57884 ']' 00:06:56.416 20:03:41 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.416 20:03:41 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:56.416 20:03:41 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:56.416 20:03:41 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:56.416 20:03:41 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:56.416 [2024-10-17 20:03:41.995567] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 00:06:56.416 [2024-10-17 20:03:41.995789] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57884 ] 00:06:56.675 [2024-10-17 20:03:42.177191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.934 [2024-10-17 20:03:42.331162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.871 20:03:43 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:57.871 20:03:43 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:06:57.871 20:03:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:57.871 20:03:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:57.871 20:03:43 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.871 20:03:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:57.871 { 00:06:57.871 "filename": "/tmp/spdk_mem_dump.txt" 00:06:57.871 } 00:06:57.871 20:03:43 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.871 20:03:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:57.871 DPDK memory size 816.000000 MiB in 1 heap(s) 00:06:57.871 1 heaps totaling size 816.000000 MiB 00:06:57.871 size: 816.000000 MiB heap id: 0 00:06:57.871 end heaps---------- 00:06:57.871 9 mempools totaling size 595.772034 MiB 00:06:57.871 
size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:57.871 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:57.871 size: 92.545471 MiB name: bdev_io_57884 00:06:57.871 size: 50.003479 MiB name: msgpool_57884 00:06:57.871 size: 36.509338 MiB name: fsdev_io_57884 00:06:57.871 size: 21.763794 MiB name: PDU_Pool 00:06:57.871 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:57.871 size: 4.133484 MiB name: evtpool_57884 00:06:57.871 size: 0.026123 MiB name: Session_Pool 00:06:57.871 end mempools------- 00:06:57.871 6 memzones totaling size 4.142822 MiB 00:06:57.871 size: 1.000366 MiB name: RG_ring_0_57884 00:06:57.871 size: 1.000366 MiB name: RG_ring_1_57884 00:06:57.871 size: 1.000366 MiB name: RG_ring_4_57884 00:06:57.871 size: 1.000366 MiB name: RG_ring_5_57884 00:06:57.871 size: 0.125366 MiB name: RG_ring_2_57884 00:06:57.871 size: 0.015991 MiB name: RG_ring_3_57884 00:06:57.871 end memzones------- 00:06:57.871 20:03:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:57.871 heap id: 0 total size: 816.000000 MiB number of busy elements: 318 number of free elements: 18 00:06:57.871 list of free elements. 
size: 16.790649 MiB 00:06:57.871 element at address: 0x200006400000 with size: 1.995972 MiB 00:06:57.871 element at address: 0x20000a600000 with size: 1.995972 MiB 00:06:57.871 element at address: 0x200003e00000 with size: 1.991028 MiB 00:06:57.871 element at address: 0x200018d00040 with size: 0.999939 MiB 00:06:57.871 element at address: 0x200019100040 with size: 0.999939 MiB 00:06:57.871 element at address: 0x200019200000 with size: 0.999084 MiB 00:06:57.871 element at address: 0x200031e00000 with size: 0.994324 MiB 00:06:57.871 element at address: 0x200000400000 with size: 0.992004 MiB 00:06:57.871 element at address: 0x200018a00000 with size: 0.959656 MiB 00:06:57.871 element at address: 0x200019500040 with size: 0.936401 MiB 00:06:57.871 element at address: 0x200000200000 with size: 0.716980 MiB 00:06:57.871 element at address: 0x20001ac00000 with size: 0.561218 MiB 00:06:57.871 element at address: 0x200000c00000 with size: 0.490173 MiB 00:06:57.871 element at address: 0x200018e00000 with size: 0.487976 MiB 00:06:57.871 element at address: 0x200019600000 with size: 0.485413 MiB 00:06:57.871 element at address: 0x200012c00000 with size: 0.443237 MiB 00:06:57.871 element at address: 0x200028000000 with size: 0.390442 MiB 00:06:57.871 element at address: 0x200000800000 with size: 0.350891 MiB 00:06:57.871 list of standard malloc elements. 
size: 199.288452 MiB
00:06:57.871 element at address: 0x20000a7fef80 with size: 132.000183 MiB
00:06:57.872 element at address: 0x2000065fef80 with size: 64.000183 MiB
00:06:57.872 element at address: 0x200018bfff80 with size: 1.000183 MiB
00:06:57.872 element at address: 0x200018ffff80 with size: 1.000183 MiB
00:06:57.872 element at address: 0x2000193fff80 with size: 1.000183 MiB
00:06:57.872 element at address: 0x2000003d9e80 with size: 0.140808 MiB
00:06:57.872 element at address: 0x2000195eff40 with size: 0.062683 MiB
00:06:57.872 element at address: 0x2000003fdf40 with size: 0.007996 MiB
00:06:57.872 element at address: 0x20000a5ff040 with size: 0.000427 MiB
00:06:57.872 element at address: 0x2000195efdc0 with size: 0.000366 MiB
00:06:57.872 element at address: 0x200012bff040 with size: 0.000305 MiB
[several hundred further elements with size: 0.000244 MiB, at addresses 0x2000002d7b00 through 0x20002806fe80, omitted for brevity]
00:06:57.874 list of memzone associated elements.
size: 599.920898 MiB
00:06:57.874 element at address: 0x20001ac954c0 with size: 211.416809 MiB
00:06:57.874 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:06:57.874 element at address: 0x20002806ff80 with size: 157.562622 MiB
00:06:57.874 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:06:57.874 element at address: 0x200012df4740 with size: 92.045105 MiB
00:06:57.874 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_57884_0
00:06:57.874 element at address: 0x200000dff340 with size: 48.003113 MiB
00:06:57.874 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57884_0
00:06:57.874 element at address: 0x200003ffdb40 with size: 36.008972 MiB
00:06:57.874 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57884_0
00:06:57.874 element at address: 0x2000197be900 with size: 20.255615 MiB
00:06:57.874 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:06:57.874 element at address: 0x200031ffeb00 with size: 18.005127 MiB
00:06:57.874 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:06:57.874 element at address: 0x2000004ffec0 with size: 3.000305 MiB
00:06:57.874 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57884_0
00:06:57.874 element at address: 0x2000009ffdc0 with size: 2.000549 MiB
00:06:57.874 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57884
00:06:57.874 element at address: 0x2000002d7c00 with size: 1.008179 MiB
00:06:57.874 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57884
00:06:57.874 element at address: 0x200018efde00 with size: 1.008179 MiB
00:06:57.874 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:06:57.874 element at address: 0x2000196bc780 with size: 1.008179 MiB
00:06:57.874 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:06:57.874 element at address: 0x200018afde00 with size: 1.008179 MiB
00:06:57.874 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:06:57.874 element at address: 0x200012cf25c0 with size: 1.008179 MiB
00:06:57.874 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:06:57.874 element at address: 0x200000cff100 with size: 1.000549 MiB
00:06:57.874 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57884
00:06:57.874 element at address: 0x2000008ffb80 with size: 1.000549 MiB
00:06:57.874 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57884
00:06:57.874 element at address: 0x2000192ffd40 with size: 1.000549 MiB
00:06:57.874 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57884
00:06:57.874 element at address: 0x200031efe8c0 with size: 1.000549 MiB
00:06:57.874 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57884
00:06:57.874 element at address: 0x20000087f5c0 with size: 0.500549 MiB
00:06:57.874 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57884
00:06:57.874 element at address: 0x200000c7ecc0 with size: 0.500549 MiB
00:06:57.874 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57884
00:06:57.874 element at address: 0x200018e7dac0 with size: 0.500549 MiB
00:06:57.874 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:06:57.874 element at address: 0x200012c72280 with size: 0.500549 MiB
00:06:57.874 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:06:57.874 element at address: 0x20001967c440 with size: 0.250549 MiB
00:06:57.874 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:06:57.874 element at address: 0x2000002b78c0 with size: 0.125549 MiB
00:06:57.874 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57884
00:06:57.874 element at address: 0x20000085df80 with size: 0.125549 MiB
00:06:57.874 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57884
00:06:57.874 element at address: 0x200018af5ac0 with size: 0.031799 MiB
00:06:57.874 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:06:57.874 element at address: 0x200028064140 with size: 0.023804 MiB
00:06:57.874 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:06:57.874 element at address: 0x200000859d40 with size: 0.016174 MiB
00:06:57.874 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57884
00:06:57.874 element at address: 0x20002806a2c0 with size: 0.002502 MiB
00:06:57.874 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:06:57.874 element at address: 0x2000004ffa40 with size: 0.000366 MiB
00:06:57.874 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57884
00:06:57.874 element at address: 0x2000008ff900 with size: 0.000366 MiB
00:06:57.874 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57884
00:06:57.874 element at address: 0x200012bffd80 with size: 0.000366 MiB
00:06:57.874 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57884
00:06:57.874 element at address: 0x20002806ae00 with size: 0.000366 MiB
00:06:57.874 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:06:57.874 20:03:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:06:57.874 20:03:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57884
00:06:57.874 20:03:43 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 57884 ']'
00:06:57.874 20:03:43 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 57884
00:06:57.874 20:03:43 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname
00:06:57.874 20:03:43 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:57.874 20:03:43 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57884
killing process with pid 57884
20:03:43 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:57.874 20:03:43 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:57.874 20:03:43 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57884'
00:06:57.874 20:03:43 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 57884
00:06:57.874 20:03:43 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 57884
00:07:00.407
00:07:00.407 real 0m3.998s
00:07:00.407 user 0m4.055s
00:07:00.407 sys 0m0.692s
00:07:00.407 20:03:45 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:00.407 ************************************
00:07:00.407 END TEST dpdk_mem_utility
00:07:00.407 ************************************
00:07:00.407 20:03:45 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:07:00.407 20:03:45 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:07:00.407 20:03:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:00.407 20:03:45 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:00.407 20:03:45 -- common/autotest_common.sh@10 -- # set +x
00:07:00.407 ************************************
00:07:00.407 START TEST event
00:07:00.407 ************************************
00:07:00.407 20:03:45 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:07:00.407 * Looking for test storage...
00:07:00.407 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:07:00.407 20:03:45 event -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:07:00.407 20:03:45 event -- common/autotest_common.sh@1691 -- # lcov --version
00:07:00.407 20:03:45 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:07:00.407 20:03:45 event -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:07:00.407 20:03:45 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:00.407 20:03:45 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:00.407 20:03:45 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:00.407 20:03:45 event -- scripts/common.sh@336 -- # IFS=.-:
00:07:00.407 20:03:45 event -- scripts/common.sh@336 -- # read -ra ver1
00:07:00.407 20:03:45 event -- scripts/common.sh@337 -- # IFS=.-:
00:07:00.407 20:03:45 event -- scripts/common.sh@337 -- # read -ra ver2
00:07:00.407 20:03:45 event -- scripts/common.sh@338 -- # local 'op=<'
00:07:00.407 20:03:45 event -- scripts/common.sh@340 -- # ver1_l=2
00:07:00.407 20:03:45 event -- scripts/common.sh@341 -- # ver2_l=1
00:07:00.407 20:03:45 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:00.407 20:03:45 event -- scripts/common.sh@344 -- # case "$op" in
00:07:00.407 20:03:45 event -- scripts/common.sh@345 -- # : 1
00:07:00.407 20:03:45 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:00.407 20:03:45 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:00.407 20:03:45 event -- scripts/common.sh@365 -- # decimal 1
00:07:00.407 20:03:45 event -- scripts/common.sh@353 -- # local d=1
00:07:00.407 20:03:45 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:00.407 20:03:45 event -- scripts/common.sh@355 -- # echo 1
00:07:00.407 20:03:45 event -- scripts/common.sh@365 -- # ver1[v]=1
00:07:00.407 20:03:45 event -- scripts/common.sh@366 -- # decimal 2
00:07:00.407 20:03:45 event -- scripts/common.sh@353 -- # local d=2
00:07:00.407 20:03:45 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:00.407 20:03:45 event -- scripts/common.sh@355 -- # echo 2
00:07:00.407 20:03:45 event -- scripts/common.sh@366 -- # ver2[v]=2
00:07:00.407 20:03:45 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:00.407 20:03:45 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:00.407 20:03:45 event -- scripts/common.sh@368 -- # return 0
00:07:00.407 20:03:45 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:00.407 20:03:45 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:07:00.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:00.407 --rc genhtml_branch_coverage=1
00:07:00.407 --rc genhtml_function_coverage=1
00:07:00.407 --rc genhtml_legend=1
00:07:00.407 --rc geninfo_all_blocks=1
00:07:00.407 --rc geninfo_unexecuted_blocks=1
00:07:00.407
00:07:00.407 '
00:07:00.407 20:03:45 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:07:00.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:00.407 --rc genhtml_branch_coverage=1
00:07:00.407 --rc genhtml_function_coverage=1
00:07:00.407 --rc genhtml_legend=1
00:07:00.407 --rc geninfo_all_blocks=1
00:07:00.407 --rc geninfo_unexecuted_blocks=1
00:07:00.407
00:07:00.407 '
00:07:00.407 20:03:45 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:07:00.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:00.407 --rc genhtml_branch_coverage=1
00:07:00.407 --rc genhtml_function_coverage=1
00:07:00.407 --rc genhtml_legend=1
00:07:00.407 --rc geninfo_all_blocks=1
00:07:00.407 --rc geninfo_unexecuted_blocks=1
00:07:00.407
00:07:00.407 '
00:07:00.407 20:03:45 event -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:07:00.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:00.407 --rc genhtml_branch_coverage=1
00:07:00.407 --rc genhtml_function_coverage=1
00:07:00.407 --rc genhtml_legend=1
00:07:00.407 --rc geninfo_all_blocks=1
00:07:00.407 --rc geninfo_unexecuted_blocks=1
00:07:00.407
00:07:00.407 '
00:07:00.407 20:03:45 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:07:00.407 20:03:45 event -- bdev/nbd_common.sh@6 -- # set -e
00:07:00.408 20:03:45 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:07:00.408 20:03:45 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']'
00:07:00.408 20:03:45 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:00.408 20:03:45 event -- common/autotest_common.sh@10 -- # set +x
00:07:00.408 ************************************
00:07:00.408 START TEST event_perf
00:07:00.408 ************************************
00:07:00.408 20:03:45 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
Running I/O for 1 seconds...[2024-10-17 20:03:45.975097] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization...
00:07:00.408 [2024-10-17 20:03:45.975427] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57992 ]
00:07:00.667 [2024-10-17 20:03:46.151520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:07:00.667 [2024-10-17 20:03:46.289902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:00.667 [2024-10-17 20:03:46.290023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
Running I/O for 1 seconds...[2024-10-17 20:03:46.290310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:00.667 [2024-10-17 20:03:46.291133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:07:02.043
00:07:02.043 lcore 0: 120086
00:07:02.043 lcore 1: 120087
00:07:02.043 lcore 2: 120087
00:07:02.043 lcore 3: 120087
00:07:02.043 done.
00:07:02.043
00:07:02.043 real 0m1.588s
00:07:02.043 user 0m4.318s
00:07:02.043 sys 0m0.141s
************************************
END TEST event_perf
************************************
00:07:02.043 20:03:47 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:02.043 20:03:47 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:07:02.043 20:03:47 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:07:02.043 20:03:47 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:07:02.043 20:03:47 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:02.043 20:03:47 event -- common/autotest_common.sh@10 -- # set +x
00:07:02.043 ************************************
00:07:02.043 START TEST event_reactor
00:07:02.043 ************************************
00:07:02.043 20:03:47 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
[2024-10-17 20:03:47.609195] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization...
00:07:02.302 [2024-10-17 20:03:47.609331] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58031 ]
00:07:02.302 [2024-10-17 20:03:47.770306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:02.302 [2024-10-17 20:03:47.893325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:03.678 test_start
00:07:03.678 oneshot
00:07:03.678 tick 100
00:07:03.678 tick 100
00:07:03.678 tick 250
00:07:03.678 tick 100
00:07:03.678 tick 100
00:07:03.678 tick 100
00:07:03.678 tick 250
00:07:03.678 tick 500
00:07:03.678 tick 100
00:07:03.678 tick 100
00:07:03.678 tick 250
00:07:03.678 tick 100
00:07:03.678 tick 100
00:07:03.678 test_end
************************************
END TEST event_reactor
************************************
00:07:03.678
00:07:03.678 real 0m1.539s
00:07:03.678 user 0m1.340s
00:07:03.678 sys 0m0.090s
00:07:03.678 20:03:49 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:03.678 20:03:49 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:07:03.678 20:03:49 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:07:03.678 20:03:49 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:07:03.678 20:03:49 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:03.678 20:03:49 event -- common/autotest_common.sh@10 -- # set +x
00:07:03.678 ************************************
00:07:03.678 START TEST event_reactor_perf
00:07:03.678 ************************************
00:07:03.678 20:03:49 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
[2024-10-17
20:03:49.215299] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 00:07:03.678 [2024-10-17 20:03:49.215505] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58068 ] 00:07:03.938 [2024-10-17 20:03:49.391827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.938 [2024-10-17 20:03:49.504547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.314 test_start 00:07:05.314 test_end 00:07:05.314 Performance: 309521 events per second 00:07:05.314 00:07:05.314 real 0m1.562s 00:07:05.314 user 0m1.340s 00:07:05.314 sys 0m0.113s 00:07:05.314 20:03:50 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:05.314 ************************************ 00:07:05.314 END TEST event_reactor_perf 00:07:05.314 ************************************ 00:07:05.314 20:03:50 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:05.314 20:03:50 event -- event/event.sh@49 -- # uname -s 00:07:05.314 20:03:50 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:05.314 20:03:50 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:05.314 20:03:50 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:05.314 20:03:50 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:05.314 20:03:50 event -- common/autotest_common.sh@10 -- # set +x 00:07:05.314 ************************************ 00:07:05.314 START TEST event_scheduler 00:07:05.314 ************************************ 00:07:05.314 20:03:50 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:05.314 * Looking for test storage... 
00:07:05.314 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:07:05.314 20:03:50 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:05.314 20:03:50 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:07:05.314 20:03:50 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:05.625 20:03:50 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:05.625 20:03:50 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:05.625 20:03:50 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:05.625 20:03:50 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:05.625 20:03:50 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:07:05.625 20:03:50 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:07:05.625 20:03:50 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:07:05.625 20:03:50 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:07:05.625 20:03:50 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:07:05.625 20:03:50 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:07:05.625 20:03:50 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:07:05.625 20:03:50 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:05.625 20:03:50 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:07:05.625 20:03:50 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:07:05.625 20:03:50 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:05.625 20:03:50 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:05.625 20:03:50 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:07:05.625 20:03:50 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:07:05.625 20:03:50 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:05.625 20:03:50 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:07:05.625 20:03:50 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:07:05.625 20:03:50 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:07:05.626 20:03:50 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:07:05.626 20:03:50 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:05.626 20:03:50 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:07:05.626 20:03:50 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:07:05.626 20:03:50 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:05.626 20:03:50 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:05.626 20:03:50 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:07:05.626 20:03:50 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:05.626 20:03:50 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:05.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.626 --rc genhtml_branch_coverage=1 00:07:05.626 --rc genhtml_function_coverage=1 00:07:05.626 --rc genhtml_legend=1 00:07:05.626 --rc geninfo_all_blocks=1 00:07:05.626 --rc geninfo_unexecuted_blocks=1 00:07:05.626 00:07:05.626 ' 00:07:05.626 20:03:50 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:05.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.626 --rc genhtml_branch_coverage=1 00:07:05.626 --rc genhtml_function_coverage=1 00:07:05.626 --rc 
genhtml_legend=1 00:07:05.626 --rc geninfo_all_blocks=1 00:07:05.626 --rc geninfo_unexecuted_blocks=1 00:07:05.626 00:07:05.626 ' 00:07:05.626 20:03:50 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:05.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.626 --rc genhtml_branch_coverage=1 00:07:05.626 --rc genhtml_function_coverage=1 00:07:05.626 --rc genhtml_legend=1 00:07:05.626 --rc geninfo_all_blocks=1 00:07:05.626 --rc geninfo_unexecuted_blocks=1 00:07:05.626 00:07:05.626 ' 00:07:05.626 20:03:50 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:05.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.626 --rc genhtml_branch_coverage=1 00:07:05.626 --rc genhtml_function_coverage=1 00:07:05.626 --rc genhtml_legend=1 00:07:05.626 --rc geninfo_all_blocks=1 00:07:05.626 --rc geninfo_unexecuted_blocks=1 00:07:05.626 00:07:05.626 ' 00:07:05.626 20:03:50 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:05.626 20:03:50 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58144 00:07:05.626 20:03:51 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:05.626 20:03:51 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:05.626 20:03:51 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58144 00:07:05.626 20:03:51 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 58144 ']' 00:07:05.626 20:03:51 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.626 20:03:51 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:05.626 20:03:51 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:07:05.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.626 20:03:51 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:05.626 20:03:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:05.626 [2024-10-17 20:03:51.112944] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 00:07:05.626 [2024-10-17 20:03:51.113674] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58144 ] 00:07:05.886 [2024-10-17 20:03:51.291795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:05.886 [2024-10-17 20:03:51.427174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.886 [2024-10-17 20:03:51.427272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.886 [2024-10-17 20:03:51.427419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:05.886 [2024-10-17 20:03:51.427436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:06.825 20:03:52 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:06.825 20:03:52 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:07:06.825 20:03:52 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:06.825 20:03:52 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.825 20:03:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:06.825 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:06.825 POWER: Cannot set governor of lcore 0 to userspace 00:07:06.825 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:06.825 POWER: Cannot set governor of lcore 0 to performance 00:07:06.825 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:06.825 POWER: Cannot set governor of lcore 0 to userspace 00:07:06.825 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:06.825 POWER: Cannot set governor of lcore 0 to userspace 00:07:06.825 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:07:06.825 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:07:06.825 POWER: Unable to set Power Management Environment for lcore 0 00:07:06.825 [2024-10-17 20:03:52.171082] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:07:06.825 [2024-10-17 20:03:52.171123] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:07:06.825 [2024-10-17 20:03:52.171138] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:07:06.825 [2024-10-17 20:03:52.171165] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:06.825 [2024-10-17 20:03:52.171178] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:06.825 [2024-10-17 20:03:52.171193] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:06.825 20:03:52 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.825 20:03:52 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:06.825 20:03:52 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.825 20:03:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:07.085 [2024-10-17 20:03:52.504524] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:07:07.085 20:03:52 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.085 20:03:52 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:07.085 20:03:52 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:07.085 20:03:52 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.085 20:03:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:07.085 ************************************ 00:07:07.085 START TEST scheduler_create_thread 00:07:07.085 ************************************ 00:07:07.085 20:03:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:07:07.085 20:03:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:07.085 20:03:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.085 20:03:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:07.085 2 00:07:07.085 20:03:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.085 20:03:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:07.085 20:03:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.085 20:03:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:07.085 3 00:07:07.085 20:03:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.085 20:03:52 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:07.085 20:03:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.085 20:03:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:07.085 4 00:07:07.085 20:03:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.085 20:03:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:07.085 20:03:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.085 20:03:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:07.085 5 00:07:07.085 20:03:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.085 20:03:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:07.085 20:03:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.085 20:03:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:07.085 6 00:07:07.085 20:03:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.085 20:03:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:07.085 20:03:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.085 20:03:52 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:07:07.085 7 00:07:07.085 20:03:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.085 20:03:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:07.085 20:03:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.085 20:03:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:07.085 8 00:07:07.085 20:03:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.085 20:03:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:07.085 20:03:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.085 20:03:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:07.085 9 00:07:07.085 20:03:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.085 20:03:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:07.085 20:03:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.085 20:03:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:07.085 10 00:07:07.085 20:03:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.085 20:03:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:07:07.085 20:03:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.085 20:03:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:07.085 20:03:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.085 20:03:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:07.085 20:03:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:07.085 20:03:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.085 20:03:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:07.085 20:03:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.085 20:03:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:07.085 20:03:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.085 20:03:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:08.021 20:03:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.021 20:03:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:08.021 20:03:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:08.021 20:03:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.021 20:03:53 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:09.396 ************************************ 00:07:09.396 END TEST scheduler_create_thread 00:07:09.396 ************************************ 00:07:09.396 20:03:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.396 00:07:09.396 real 0m2.140s 00:07:09.396 user 0m0.019s 00:07:09.396 sys 0m0.008s 00:07:09.396 20:03:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:09.396 20:03:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:09.396 20:03:54 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:09.397 20:03:54 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58144 00:07:09.397 20:03:54 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 58144 ']' 00:07:09.397 20:03:54 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 58144 00:07:09.397 20:03:54 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:07:09.397 20:03:54 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:09.397 20:03:54 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58144 00:07:09.397 killing process with pid 58144 00:07:09.397 20:03:54 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:09.397 20:03:54 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:09.397 20:03:54 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58144' 00:07:09.397 20:03:54 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 58144 00:07:09.397 20:03:54 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 58144 00:07:09.655 [2024-10-17 20:03:55.138950] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:07:10.593 00:07:10.593 real 0m5.428s 00:07:10.593 user 0m9.538s 00:07:10.593 sys 0m0.541s 00:07:10.593 ************************************ 00:07:10.593 END TEST event_scheduler 00:07:10.593 ************************************ 00:07:10.593 20:03:56 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:10.593 20:03:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:10.852 20:03:56 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:10.852 20:03:56 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:10.852 20:03:56 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:10.852 20:03:56 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:10.852 20:03:56 event -- common/autotest_common.sh@10 -- # set +x 00:07:10.852 ************************************ 00:07:10.852 START TEST app_repeat 00:07:10.852 ************************************ 00:07:10.852 20:03:56 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:07:10.852 20:03:56 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:10.852 20:03:56 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:10.852 20:03:56 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:10.852 20:03:56 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:10.852 20:03:56 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:10.852 20:03:56 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:10.852 20:03:56 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:10.852 20:03:56 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58250 00:07:10.852 20:03:56 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:10.852 
20:03:56 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:10.852 Process app_repeat pid: 58250 00:07:10.852 20:03:56 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58250' 00:07:10.852 spdk_app_start Round 0 00:07:10.852 20:03:56 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:10.852 20:03:56 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:10.852 20:03:56 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58250 /var/tmp/spdk-nbd.sock 00:07:10.852 20:03:56 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58250 ']' 00:07:10.852 20:03:56 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:10.852 20:03:56 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:10.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:10.852 20:03:56 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:10.852 20:03:56 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:10.852 20:03:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:10.852 [2024-10-17 20:03:56.368150] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 
00:07:10.852 [2024-10-17 20:03:56.369336] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58250 ] 00:07:11.115 [2024-10-17 20:03:56.564725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:11.115 [2024-10-17 20:03:56.716770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.115 [2024-10-17 20:03:56.716774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:12.100 20:03:57 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:12.100 20:03:57 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:12.100 20:03:57 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:12.100 Malloc0 00:07:12.358 20:03:57 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:12.628 Malloc1 00:07:12.628 20:03:58 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:12.628 20:03:58 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.628 20:03:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:12.628 20:03:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:12.628 20:03:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:12.628 20:03:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:12.628 20:03:58 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:12.628 20:03:58 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.629 20:03:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:12.629 20:03:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:12.629 20:03:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:12.629 20:03:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:12.629 20:03:58 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:12.629 20:03:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:12.629 20:03:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:12.629 20:03:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:12.890 /dev/nbd0 00:07:12.890 20:03:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:12.890 20:03:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:12.890 20:03:58 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:12.890 20:03:58 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:12.890 20:03:58 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:12.890 20:03:58 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:12.890 20:03:58 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:12.890 20:03:58 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:12.890 20:03:58 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:12.890 20:03:58 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:12.890 20:03:58 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:12.890 1+0 records in 00:07:12.890 1+0 
records out 00:07:12.890 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000386258 s, 10.6 MB/s 00:07:12.890 20:03:58 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:12.890 20:03:58 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:12.890 20:03:58 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:12.890 20:03:58 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:12.890 20:03:58 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:12.890 20:03:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:12.890 20:03:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:12.890 20:03:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:13.149 /dev/nbd1 00:07:13.149 20:03:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:13.149 20:03:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:13.149 20:03:58 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:13.149 20:03:58 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:13.149 20:03:58 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:13.149 20:03:58 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:13.149 20:03:58 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:13.149 20:03:58 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:13.149 20:03:58 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:13.149 20:03:58 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:13.149 20:03:58 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:13.149 1+0 records in 00:07:13.149 1+0 records out 00:07:13.149 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000380396 s, 10.8 MB/s 00:07:13.149 20:03:58 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:13.149 20:03:58 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:13.149 20:03:58 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:13.149 20:03:58 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:13.149 20:03:58 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:13.149 20:03:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:13.149 20:03:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:13.149 20:03:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:13.149 20:03:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.149 20:03:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:13.408 20:03:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:13.408 { 00:07:13.408 "nbd_device": "/dev/nbd0", 00:07:13.408 "bdev_name": "Malloc0" 00:07:13.408 }, 00:07:13.408 { 00:07:13.408 "nbd_device": "/dev/nbd1", 00:07:13.408 "bdev_name": "Malloc1" 00:07:13.408 } 00:07:13.408 ]' 00:07:13.408 20:03:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:13.408 { 00:07:13.408 "nbd_device": "/dev/nbd0", 00:07:13.408 "bdev_name": "Malloc0" 00:07:13.408 }, 00:07:13.408 { 00:07:13.408 "nbd_device": "/dev/nbd1", 00:07:13.408 "bdev_name": "Malloc1" 00:07:13.408 } 00:07:13.408 ]' 00:07:13.408 20:03:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
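The `waitfornbd` calls traced above poll `/proc/partitions` up to 20 times until the freshly started nbd device appears, then `break` out and do a 1-block `dd` sanity read. The sketch below reproduces that polling structure in isolation; it polls an ordinary temp file instead of `/proc/partitions` (and drops the sleep-between-retries detail down to 0.1s) so it runs without real nbd devices — the function name and file are illustrative, not part of SPDK.

```shell
# Polling pattern from waitfornbd: retry up to 20 times, succeeding as
# soon as the name shows up in the table (here a temp file standing in
# for /proc/partitions).
waitfor_entry() {
    local name=$1 table=$2 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$name" "$table" && return 0
        sleep 0.1
    done
    return 1
}

table=$(mktemp)
echo "nbd0" > "$table"
waitfor_entry nbd0 "$table" && echo "nbd0 present"
waitfor_entry nbd9 "$table" || echo "nbd9 absent"
rm -f "$table"
```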
00:07:13.667 20:03:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:13.667 /dev/nbd1' 00:07:13.667 20:03:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:13.667 /dev/nbd1' 00:07:13.667 20:03:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:13.667 20:03:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:13.667 20:03:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:13.667 20:03:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:13.667 20:03:59 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:13.667 20:03:59 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:13.667 20:03:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:13.667 20:03:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:13.667 20:03:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:13.667 20:03:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:13.667 20:03:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:13.667 20:03:59 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:13.667 256+0 records in 00:07:13.667 256+0 records out 00:07:13.667 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00633863 s, 165 MB/s 00:07:13.667 20:03:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:13.667 20:03:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:13.667 256+0 records in 00:07:13.667 256+0 records out 00:07:13.667 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0265799 s, 39.5 MB/s 00:07:13.667 20:03:59 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:13.667 20:03:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:13.667 256+0 records in 00:07:13.667 256+0 records out 00:07:13.667 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0292161 s, 35.9 MB/s 00:07:13.667 20:03:59 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:13.667 20:03:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:13.667 20:03:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:13.667 20:03:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:13.667 20:03:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:13.667 20:03:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:13.667 20:03:59 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:13.667 20:03:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:13.667 20:03:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:13.667 20:03:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:13.667 20:03:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:13.667 20:03:59 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:13.667 20:03:59 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:13.667 20:03:59 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.667 20:03:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
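The `nbd_dd_data_verify` sequence above has two phases: a `write` pass that fills a 1 MiB pattern file from `/dev/urandom` and `dd`s it onto each nbd device, and a `verify` pass that `cmp`s each device back against the same pattern file before deleting it. A minimal sketch of that flow, with temp files standing in for `/dev/nbd0` and `/dev/nbd1` so it runs anywhere (`oflag=direct` is dropped for the same reason):

```shell
# Write phase: generate a random pattern, copy it to each "device".
pattern=$(mktemp) target0=$(mktemp) target1=$(mktemp)
dd if=/dev/urandom of="$pattern" bs=4096 count=256 status=none

for dev in "$target0" "$target1"; do
    dd if="$pattern" of="$dev" bs=4096 count=256 status=none
done

# Verify phase: byte-compare the first 1M of each target against the
# pattern file, then clean up, mirroring the cmp -b -n 1M / rm steps.
for dev in "$target0" "$target1"; do
    cmp -b -n 1M "$pattern" "$dev" && echo "verify ok: $dev"
done
rm -f "$pattern" "$target0" "$target1"
```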
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:13.667 20:03:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:13.667 20:03:59 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:13.667 20:03:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:13.667 20:03:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:13.924 20:03:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:13.924 20:03:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:13.924 20:03:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:13.924 20:03:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:13.924 20:03:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:13.924 20:03:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:13.924 20:03:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:13.924 20:03:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:13.924 20:03:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:13.924 20:03:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:14.182 20:03:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:14.182 20:03:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:14.182 20:03:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:14.182 20:03:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:14.182 20:03:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:14.182 20:03:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:14.182 20:03:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:07:14.182 20:03:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:14.182 20:03:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:14.182 20:03:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.182 20:03:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:14.440 20:03:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:14.440 20:03:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:14.440 20:03:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:14.440 20:04:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:14.440 20:04:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:14.440 20:04:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:14.440 20:04:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:14.440 20:04:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:14.440 20:04:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:14.440 20:04:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:14.440 20:04:00 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:14.440 20:04:00 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:14.440 20:04:00 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:15.006 20:04:00 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:15.940 [2024-10-17 20:04:01.530726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:16.198 [2024-10-17 20:04:01.659897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:16.198 [2024-10-17 20:04:01.659925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.457 
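The `nbd_get_count` trace above shows how the test derives its device count: `nbd_get_disks` returns a JSON array, `jq -r '.[] | .nbd_device'` extracts the device paths, and `grep -c /dev/nbd` counts them — with the bare `true` in the trace absorbing grep's nonzero exit when the list is empty, so the count falls back to 0. A sketch of that pipeline with hand-written JSON standing in for real `rpc.py` output:

```shell
# Populated case: two devices registered -> count=2.
json='[ {"nbd_device": "/dev/nbd0", "bdev_name": "Malloc0"},
        {"nbd_device": "/dev/nbd1", "bdev_name": "Malloc1"} ]'
names=$(echo "$json" | jq -r '.[] | .nbd_device')
count=$(echo "$names" | grep -c /dev/nbd || true)
echo "count=$count"

# Empty case after nbd_stop_disk: grep -c finds nothing and exits
# nonzero; "|| true" keeps the script alive and count stays 0.
empty_names=$(echo '[]' | jq -r '.[] | .nbd_device')
empty_count=$(echo "$empty_names" | grep -c /dev/nbd || true)
echo "empty_count=$empty_count"
```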
[2024-10-17 20:04:01.854419] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:16.457 [2024-10-17 20:04:01.854564] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:18.356 20:04:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:18.356 spdk_app_start Round 1 00:07:18.356 20:04:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:18.356 20:04:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58250 /var/tmp/spdk-nbd.sock 00:07:18.356 20:04:03 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58250 ']' 00:07:18.356 20:04:03 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:18.356 20:04:03 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:18.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:18.356 20:04:03 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:07:18.356 20:04:03 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:18.356 20:04:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:18.356 20:04:03 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:18.356 20:04:03 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:18.356 20:04:03 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:18.614 Malloc0 00:07:18.614 20:04:04 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:18.873 Malloc1 00:07:18.873 20:04:04 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:18.873 20:04:04 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.873 20:04:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:18.873 20:04:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:18.873 20:04:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:18.873 20:04:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:18.873 20:04:04 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:18.873 20:04:04 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.873 20:04:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:18.873 20:04:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:18.873 20:04:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:18.873 20:04:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:18.873 20:04:04 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:18.873 20:04:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:18.873 20:04:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:18.873 20:04:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:19.132 /dev/nbd0 00:07:19.132 20:04:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:19.132 20:04:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:19.132 20:04:04 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:19.132 20:04:04 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:19.132 20:04:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:19.132 20:04:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:19.132 20:04:04 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:19.132 20:04:04 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:19.132 20:04:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:19.132 20:04:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:19.132 20:04:04 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:19.132 1+0 records in 00:07:19.132 1+0 records out 00:07:19.132 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000233903 s, 17.5 MB/s 00:07:19.132 20:04:04 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:19.132 20:04:04 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:19.132 20:04:04 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:19.132 
20:04:04 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:19.132 20:04:04 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:19.132 20:04:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:19.132 20:04:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:19.132 20:04:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:19.390 /dev/nbd1 00:07:19.390 20:04:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:19.390 20:04:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:19.390 20:04:05 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:19.390 20:04:05 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:19.390 20:04:05 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:19.390 20:04:05 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:19.390 20:04:05 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:19.390 20:04:05 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:19.390 20:04:05 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:19.390 20:04:05 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:19.390 20:04:05 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:19.390 1+0 records in 00:07:19.390 1+0 records out 00:07:19.390 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00028927 s, 14.2 MB/s 00:07:19.649 20:04:05 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:19.649 20:04:05 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:19.649 20:04:05 event.app_repeat 
-- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:19.649 20:04:05 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:19.649 20:04:05 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:19.649 20:04:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:19.649 20:04:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:19.649 20:04:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:19.649 20:04:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:19.649 20:04:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:19.908 20:04:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:19.908 { 00:07:19.908 "nbd_device": "/dev/nbd0", 00:07:19.908 "bdev_name": "Malloc0" 00:07:19.908 }, 00:07:19.908 { 00:07:19.908 "nbd_device": "/dev/nbd1", 00:07:19.908 "bdev_name": "Malloc1" 00:07:19.908 } 00:07:19.908 ]' 00:07:19.908 20:04:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:19.908 { 00:07:19.908 "nbd_device": "/dev/nbd0", 00:07:19.908 "bdev_name": "Malloc0" 00:07:19.908 }, 00:07:19.908 { 00:07:19.908 "nbd_device": "/dev/nbd1", 00:07:19.908 "bdev_name": "Malloc1" 00:07:19.908 } 00:07:19.908 ]' 00:07:19.908 20:04:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:19.908 20:04:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:19.908 /dev/nbd1' 00:07:19.908 20:04:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:19.908 20:04:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:19.908 /dev/nbd1' 00:07:19.908 20:04:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:19.908 20:04:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:19.908 
20:04:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:19.908 20:04:05 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:19.908 20:04:05 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:19.908 20:04:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:19.908 20:04:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:19.908 20:04:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:19.908 20:04:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:19.908 20:04:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:19.908 20:04:05 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:19.908 256+0 records in 00:07:19.908 256+0 records out 00:07:19.908 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00639857 s, 164 MB/s 00:07:19.908 20:04:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:19.908 20:04:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:19.908 256+0 records in 00:07:19.908 256+0 records out 00:07:19.908 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.023945 s, 43.8 MB/s 00:07:19.908 20:04:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:19.908 20:04:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:19.908 256+0 records in 00:07:19.908 256+0 records out 00:07:19.908 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0390456 s, 26.9 MB/s 00:07:19.908 20:04:05 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:07:19.908 20:04:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:19.908 20:04:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:19.908 20:04:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:19.908 20:04:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:19.908 20:04:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:19.908 20:04:05 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:19.908 20:04:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:19.908 20:04:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:19.908 20:04:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:19.908 20:04:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:19.908 20:04:05 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:19.909 20:04:05 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:19.909 20:04:05 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:19.909 20:04:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:19.909 20:04:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:19.909 20:04:05 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:19.909 20:04:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:19.909 20:04:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:20.167 20:04:05 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:20.167 20:04:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:20.167 20:04:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:20.167 20:04:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:20.167 20:04:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:20.167 20:04:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:20.167 20:04:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:20.167 20:04:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:20.167 20:04:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:20.167 20:04:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:20.425 20:04:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:20.684 20:04:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:20.684 20:04:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:20.684 20:04:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:20.684 20:04:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:20.684 20:04:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:20.684 20:04:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:20.684 20:04:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:20.684 20:04:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:20.684 20:04:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:20.684 20:04:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:20.942 20:04:06 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:20.942 20:04:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:20.942 20:04:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:20.942 20:04:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:20.942 20:04:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:20.942 20:04:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:20.942 20:04:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:20.942 20:04:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:20.942 20:04:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:20.942 20:04:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:20.942 20:04:06 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:20.942 20:04:06 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:20.942 20:04:06 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:21.508 20:04:06 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:22.439 [2024-10-17 20:04:07.931613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:22.439 [2024-10-17 20:04:08.062544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.439 [2024-10-17 20:04:08.062548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.697 [2024-10-17 20:04:08.253692] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:22.697 [2024-10-17 20:04:08.253819] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:24.638 spdk_app_start Round 2 00:07:24.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:24.638 20:04:09 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:24.638 20:04:09 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:24.638 20:04:09 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58250 /var/tmp/spdk-nbd.sock 00:07:24.638 20:04:09 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58250 ']' 00:07:24.638 20:04:09 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:24.638 20:04:09 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:24.638 20:04:09 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:24.638 20:04:09 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:24.638 20:04:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:24.638 20:04:10 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:24.638 20:04:10 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:24.638 20:04:10 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:24.896 Malloc0 00:07:25.154 20:04:10 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:25.412 Malloc1 00:07:25.412 20:04:10 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:25.412 20:04:10 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:25.412 20:04:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:25.412 20:04:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:25.412 20:04:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:25.412 20:04:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:25.412 20:04:10 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:25.412 20:04:10 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:25.412 20:04:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:25.412 20:04:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:25.412 20:04:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:25.412 20:04:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:25.412 20:04:10 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:25.412 20:04:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:25.412 20:04:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:25.412 20:04:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:25.670 /dev/nbd0 00:07:25.670 20:04:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:25.670 20:04:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:25.670 20:04:11 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:25.670 20:04:11 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:25.670 20:04:11 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:25.670 20:04:11 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:25.670 20:04:11 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:25.670 20:04:11 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:25.670 20:04:11 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 
00:07:25.670 20:04:11 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:25.670 20:04:11 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:25.670 1+0 records in 00:07:25.670 1+0 records out 00:07:25.670 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000247543 s, 16.5 MB/s 00:07:25.670 20:04:11 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:25.670 20:04:11 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:25.670 20:04:11 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:25.670 20:04:11 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:25.670 20:04:11 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:25.670 20:04:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:25.670 20:04:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:25.671 20:04:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:25.928 /dev/nbd1 00:07:25.928 20:04:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:25.928 20:04:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:25.928 20:04:11 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:25.928 20:04:11 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:25.928 20:04:11 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:25.928 20:04:11 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:25.928 20:04:11 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:25.928 20:04:11 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:07:25.928 20:04:11 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:25.928 20:04:11 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:25.928 20:04:11 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:25.928 1+0 records in 00:07:25.928 1+0 records out 00:07:25.928 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000362775 s, 11.3 MB/s 00:07:25.928 20:04:11 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:25.928 20:04:11 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:25.928 20:04:11 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:25.928 20:04:11 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:25.928 20:04:11 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:25.928 20:04:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:25.928 20:04:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:25.928 20:04:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:25.928 20:04:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:25.928 20:04:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:26.186 20:04:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:26.186 { 00:07:26.186 "nbd_device": "/dev/nbd0", 00:07:26.186 "bdev_name": "Malloc0" 00:07:26.186 }, 00:07:26.186 { 00:07:26.186 "nbd_device": "/dev/nbd1", 00:07:26.186 "bdev_name": "Malloc1" 00:07:26.186 } 00:07:26.186 ]' 00:07:26.186 20:04:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:07:26.186 20:04:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:26.186 { 00:07:26.186 "nbd_device": "/dev/nbd0", 00:07:26.186 "bdev_name": "Malloc0" 00:07:26.186 }, 00:07:26.186 { 00:07:26.186 "nbd_device": "/dev/nbd1", 00:07:26.186 "bdev_name": "Malloc1" 00:07:26.186 } 00:07:26.186 ]' 00:07:26.186 20:04:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:26.186 /dev/nbd1' 00:07:26.186 20:04:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:26.186 20:04:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:26.186 /dev/nbd1' 00:07:26.186 20:04:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:26.186 20:04:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:26.186 20:04:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:26.186 20:04:11 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:26.186 20:04:11 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:26.186 20:04:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:26.186 20:04:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:26.186 20:04:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:26.186 20:04:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:26.186 20:04:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:26.186 20:04:11 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:26.444 256+0 records in 00:07:26.444 256+0 records out 00:07:26.444 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00926902 s, 113 MB/s 00:07:26.444 20:04:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:26.444 20:04:11 
event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:26.444 256+0 records in 00:07:26.444 256+0 records out 00:07:26.444 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0237701 s, 44.1 MB/s 00:07:26.444 20:04:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:26.444 20:04:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:26.444 256+0 records in 00:07:26.444 256+0 records out 00:07:26.444 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0315968 s, 33.2 MB/s 00:07:26.444 20:04:11 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:26.444 20:04:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:26.444 20:04:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:26.444 20:04:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:26.444 20:04:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:26.444 20:04:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:26.444 20:04:11 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:26.444 20:04:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:26.444 20:04:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:26.444 20:04:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:26.444 20:04:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:26.444 20:04:11 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:26.444 20:04:11 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:26.444 20:04:11 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:26.444 20:04:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:26.444 20:04:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:26.444 20:04:11 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:26.445 20:04:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:26.445 20:04:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:26.703 20:04:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:26.703 20:04:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:26.703 20:04:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:26.703 20:04:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:26.703 20:04:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:26.703 20:04:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:26.703 20:04:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:26.703 20:04:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:26.703 20:04:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:26.703 20:04:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:26.962 20:04:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:26.962 20:04:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:26.962 20:04:12 event.app_repeat -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd1 00:07:26.962 20:04:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:26.962 20:04:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:26.962 20:04:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:26.962 20:04:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:26.962 20:04:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:26.962 20:04:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:26.962 20:04:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:26.962 20:04:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:27.220 20:04:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:27.220 20:04:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:27.220 20:04:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:27.220 20:04:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:27.220 20:04:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:27.220 20:04:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:27.220 20:04:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:27.220 20:04:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:27.220 20:04:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:27.220 20:04:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:27.220 20:04:12 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:27.220 20:04:12 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:27.220 20:04:12 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:27.787 20:04:13 event.app_repeat -- 
event/event.sh@35 -- # sleep 3 00:07:28.722 [2024-10-17 20:04:14.333936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:28.981 [2024-10-17 20:04:14.466388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:28.981 [2024-10-17 20:04:14.466401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.238 [2024-10-17 20:04:14.648663] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:29.238 [2024-10-17 20:04:14.648909] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:31.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:31.136 20:04:16 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58250 /var/tmp/spdk-nbd.sock 00:07:31.136 20:04:16 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58250 ']' 00:07:31.136 20:04:16 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:31.136 20:04:16 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:31.136 20:04:16 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:07:31.136 20:04:16 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:31.136 20:04:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:31.136 20:04:16 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:31.136 20:04:16 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:31.136 20:04:16 event.app_repeat -- event/event.sh@39 -- # killprocess 58250 00:07:31.136 20:04:16 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 58250 ']' 00:07:31.136 20:04:16 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 58250 00:07:31.136 20:04:16 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:07:31.136 20:04:16 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:31.136 20:04:16 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58250 00:07:31.136 killing process with pid 58250 00:07:31.136 20:04:16 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:31.136 20:04:16 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:31.136 20:04:16 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58250' 00:07:31.136 20:04:16 event.app_repeat -- common/autotest_common.sh@969 -- # kill 58250 00:07:31.136 20:04:16 event.app_repeat -- common/autotest_common.sh@974 -- # wait 58250 00:07:32.071 spdk_app_start is called in Round 0. 00:07:32.071 Shutdown signal received, stop current app iteration 00:07:32.071 Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 reinitialization... 00:07:32.071 spdk_app_start is called in Round 1. 00:07:32.071 Shutdown signal received, stop current app iteration 00:07:32.071 Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 reinitialization... 00:07:32.071 spdk_app_start is called in Round 2. 
00:07:32.071 Shutdown signal received, stop current app iteration 00:07:32.071 Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 reinitialization... 00:07:32.071 spdk_app_start is called in Round 3. 00:07:32.071 Shutdown signal received, stop current app iteration 00:07:32.071 20:04:17 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:32.071 20:04:17 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:32.071 00:07:32.071 real 0m21.266s 00:07:32.071 user 0m46.939s 00:07:32.071 sys 0m3.023s 00:07:32.071 20:04:17 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:32.071 ************************************ 00:07:32.071 END TEST app_repeat 00:07:32.071 ************************************ 00:07:32.071 20:04:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:32.071 20:04:17 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:32.071 20:04:17 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:32.071 20:04:17 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:32.071 20:04:17 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:32.071 20:04:17 event -- common/autotest_common.sh@10 -- # set +x 00:07:32.071 ************************************ 00:07:32.071 START TEST cpu_locks 00:07:32.071 ************************************ 00:07:32.071 20:04:17 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:32.071 * Looking for test storage... 
00:07:32.071 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:32.071 20:04:17 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:32.071 20:04:17 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:32.071 20:04:17 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:07:32.329 20:04:17 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:32.329 20:04:17 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:32.329 20:04:17 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:32.329 20:04:17 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:32.329 20:04:17 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:32.329 20:04:17 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:32.329 20:04:17 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:32.329 20:04:17 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:32.329 20:04:17 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:32.329 20:04:17 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:32.329 20:04:17 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:32.329 20:04:17 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:32.329 20:04:17 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:32.329 20:04:17 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:32.329 20:04:17 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:32.329 20:04:17 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:32.329 20:04:17 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:32.329 20:04:17 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:32.329 20:04:17 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:32.329 20:04:17 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:32.329 20:04:17 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:32.329 20:04:17 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:32.329 20:04:17 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:32.329 20:04:17 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:32.329 20:04:17 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:32.329 20:04:17 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:32.329 20:04:17 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:32.329 20:04:17 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:32.329 20:04:17 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:32.329 20:04:17 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:32.329 20:04:17 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:32.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.329 --rc genhtml_branch_coverage=1 00:07:32.329 --rc genhtml_function_coverage=1 00:07:32.329 --rc genhtml_legend=1 00:07:32.329 --rc geninfo_all_blocks=1 00:07:32.329 --rc geninfo_unexecuted_blocks=1 00:07:32.329 00:07:32.329 ' 00:07:32.329 20:04:17 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:32.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.329 --rc genhtml_branch_coverage=1 00:07:32.329 --rc genhtml_function_coverage=1 00:07:32.329 --rc genhtml_legend=1 00:07:32.329 --rc geninfo_all_blocks=1 00:07:32.329 --rc geninfo_unexecuted_blocks=1 
00:07:32.329 00:07:32.329 ' 00:07:32.329 20:04:17 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:32.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.329 --rc genhtml_branch_coverage=1 00:07:32.329 --rc genhtml_function_coverage=1 00:07:32.329 --rc genhtml_legend=1 00:07:32.329 --rc geninfo_all_blocks=1 00:07:32.329 --rc geninfo_unexecuted_blocks=1 00:07:32.329 00:07:32.329 ' 00:07:32.329 20:04:17 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:32.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.329 --rc genhtml_branch_coverage=1 00:07:32.329 --rc genhtml_function_coverage=1 00:07:32.329 --rc genhtml_legend=1 00:07:32.329 --rc geninfo_all_blocks=1 00:07:32.329 --rc geninfo_unexecuted_blocks=1 00:07:32.329 00:07:32.329 ' 00:07:32.330 20:04:17 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:32.330 20:04:17 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:32.330 20:04:17 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:32.330 20:04:17 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:32.330 20:04:17 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:32.330 20:04:17 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:32.330 20:04:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:32.330 ************************************ 00:07:32.330 START TEST default_locks 00:07:32.330 ************************************ 00:07:32.330 20:04:17 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:07:32.330 20:04:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58719 00:07:32.330 20:04:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:32.330 
20:04:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58719 00:07:32.330 20:04:17 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 58719 ']' 00:07:32.330 20:04:17 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.330 20:04:17 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:32.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.330 20:04:17 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.330 20:04:17 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:32.330 20:04:17 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:32.330 [2024-10-17 20:04:17.917122] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 
00:07:32.330 [2024-10-17 20:04:17.917800] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58719 ] 00:07:32.588 [2024-10-17 20:04:18.082472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.588 [2024-10-17 20:04:18.215841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.554 20:04:19 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:33.554 20:04:19 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:07:33.554 20:04:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58719 00:07:33.554 20:04:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58719 00:07:33.554 20:04:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:34.120 20:04:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58719 00:07:34.120 20:04:19 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 58719 ']' 00:07:34.120 20:04:19 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 58719 00:07:34.120 20:04:19 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:07:34.120 20:04:19 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:34.120 20:04:19 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58719 00:07:34.120 20:04:19 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:34.120 20:04:19 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:34.120 killing process with pid 58719 00:07:34.120 20:04:19 event.cpu_locks.default_locks -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 58719' 00:07:34.120 20:04:19 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 58719 00:07:34.120 20:04:19 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 58719 00:07:36.653 20:04:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58719 00:07:36.653 20:04:21 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:07:36.653 20:04:21 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58719 00:07:36.653 20:04:21 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:36.653 20:04:21 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:36.653 20:04:21 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:36.653 20:04:21 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:36.653 20:04:21 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 58719 00:07:36.653 20:04:21 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 58719 ']' 00:07:36.653 20:04:21 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.653 20:04:21 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:36.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.653 20:04:21 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:36.653 20:04:21 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:36.653 20:04:21 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:36.653 ERROR: process (pid: 58719) is no longer running 00:07:36.653 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (58719) - No such process 00:07:36.653 20:04:21 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:36.653 20:04:21 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:07:36.653 20:04:21 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:07:36.653 20:04:21 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:36.653 20:04:21 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:36.653 20:04:21 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:36.653 20:04:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:36.653 20:04:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:36.653 20:04:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:36.653 20:04:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:36.653 00:07:36.653 real 0m3.958s 00:07:36.653 user 0m3.935s 00:07:36.653 sys 0m0.747s 00:07:36.653 20:04:21 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:36.653 20:04:21 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:36.653 ************************************ 00:07:36.653 END TEST default_locks 00:07:36.653 ************************************ 00:07:36.653 20:04:21 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:36.653 20:04:21 event.cpu_locks -- common/autotest_common.sh@1101 -- # 
'[' 2 -le 1 ']' 00:07:36.653 20:04:21 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:36.653 20:04:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:36.653 ************************************ 00:07:36.653 START TEST default_locks_via_rpc 00:07:36.653 ************************************ 00:07:36.653 20:04:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:07:36.653 20:04:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58794 00:07:36.653 20:04:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58794 00:07:36.653 20:04:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:36.653 20:04:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 58794 ']' 00:07:36.653 20:04:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.653 20:04:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:36.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.653 20:04:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.653 20:04:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:36.653 20:04:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.653 [2024-10-17 20:04:21.939173] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 
00:07:36.653 [2024-10-17 20:04:21.939368] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58794 ] 00:07:36.653 [2024-10-17 20:04:22.113033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.653 [2024-10-17 20:04:22.245322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.594 20:04:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:37.594 20:04:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:37.594 20:04:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:37.594 20:04:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.594 20:04:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:37.594 20:04:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.594 20:04:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:37.594 20:04:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:37.594 20:04:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:37.594 20:04:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:37.594 20:04:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:37.594 20:04:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.594 20:04:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:37.594 20:04:23 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.594 20:04:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58794 00:07:37.594 20:04:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58794 00:07:37.594 20:04:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:38.160 20:04:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58794 00:07:38.160 20:04:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 58794 ']' 00:07:38.160 20:04:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 58794 00:07:38.160 20:04:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:07:38.160 20:04:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:38.160 20:04:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58794 00:07:38.160 20:04:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:38.160 killing process with pid 58794 00:07:38.160 20:04:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:38.160 20:04:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58794' 00:07:38.160 20:04:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 58794 00:07:38.160 20:04:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 58794 00:07:40.689 00:07:40.689 real 0m3.935s 00:07:40.689 user 0m3.955s 00:07:40.689 sys 0m0.705s 00:07:40.689 20:04:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:40.689 20:04:25 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.689 ************************************ 00:07:40.689 END TEST default_locks_via_rpc 00:07:40.689 ************************************ 00:07:40.689 20:04:25 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:40.689 20:04:25 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:40.689 20:04:25 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:40.689 20:04:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:40.689 ************************************ 00:07:40.689 START TEST non_locking_app_on_locked_coremask 00:07:40.689 ************************************ 00:07:40.689 20:04:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:07:40.689 20:04:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58868 00:07:40.689 20:04:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:40.689 20:04:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58868 /var/tmp/spdk.sock 00:07:40.689 20:04:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58868 ']' 00:07:40.689 20:04:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.689 20:04:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:40.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:40.689 20:04:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.689 20:04:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:40.689 20:04:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:40.689 [2024-10-17 20:04:25.904092] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 00:07:40.689 [2024-10-17 20:04:25.904271] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58868 ] 00:07:40.689 [2024-10-17 20:04:26.068679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.689 [2024-10-17 20:04:26.200605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.623 20:04:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:41.623 20:04:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:41.623 20:04:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58884 00:07:41.623 20:04:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58884 /var/tmp/spdk2.sock 00:07:41.623 20:04:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:41.623 20:04:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58884 ']' 00:07:41.623 20:04:27 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:41.623 20:04:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:41.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:41.623 20:04:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:41.623 20:04:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:41.623 20:04:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:41.623 [2024-10-17 20:04:27.187891] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 00:07:41.623 [2024-10-17 20:04:27.188082] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58884 ] 00:07:41.880 [2024-10-17 20:04:27.370684] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:41.880 [2024-10-17 20:04:27.370759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.138 [2024-10-17 20:04:27.627418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.669 20:04:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:44.669 20:04:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:44.669 20:04:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58868 00:07:44.669 20:04:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58868 00:07:44.669 20:04:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:45.234 20:04:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58868 00:07:45.234 20:04:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 58868 ']' 00:07:45.234 20:04:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 58868 00:07:45.234 20:04:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:45.234 20:04:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:45.234 20:04:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58868 00:07:45.234 20:04:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:45.234 20:04:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:45.234 killing process with pid 58868 00:07:45.234 20:04:30 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 58868' 00:07:45.234 20:04:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 58868 00:07:45.234 20:04:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 58868 00:07:49.415 20:04:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58884 00:07:49.415 20:04:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 58884 ']' 00:07:49.415 20:04:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 58884 00:07:49.415 20:04:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:49.415 20:04:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:49.415 20:04:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58884 00:07:49.415 20:04:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:49.415 20:04:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:49.415 killing process with pid 58884 00:07:49.415 20:04:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58884' 00:07:49.415 20:04:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 58884 00:07:49.415 20:04:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 58884 00:07:51.946 00:07:51.946 real 0m11.243s 00:07:51.946 user 0m11.801s 00:07:51.946 sys 0m1.516s 00:07:51.946 20:04:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:07:51.946 20:04:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:51.946 ************************************ 00:07:51.946 END TEST non_locking_app_on_locked_coremask 00:07:51.946 ************************************ 00:07:51.946 20:04:37 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:51.946 20:04:37 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:51.946 20:04:37 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:51.946 20:04:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:51.946 ************************************ 00:07:51.946 START TEST locking_app_on_unlocked_coremask 00:07:51.946 ************************************ 00:07:51.946 20:04:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:07:51.946 20:04:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59032 00:07:51.946 20:04:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59032 /var/tmp/spdk.sock 00:07:51.946 20:04:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59032 ']' 00:07:51.946 20:04:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.946 20:04:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:51.946 20:04:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:51.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:51.946 20:04:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.946 20:04:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:51.946 20:04:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:51.946 [2024-10-17 20:04:37.226551] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 00:07:51.946 [2024-10-17 20:04:37.226741] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59032 ] 00:07:51.946 [2024-10-17 20:04:37.403916] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:51.946 [2024-10-17 20:04:37.404022] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.946 [2024-10-17 20:04:37.541697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.881 20:04:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:52.881 20:04:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:52.881 20:04:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59053 00:07:52.881 20:04:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59053 /var/tmp/spdk2.sock 00:07:52.881 20:04:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59053 ']' 00:07:52.881 20:04:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 
00:07:52.881 20:04:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:52.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:52.881 20:04:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:52.881 20:04:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:52.881 20:04:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:52.881 20:04:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:52.881 [2024-10-17 20:04:38.506635] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 00:07:52.881 [2024-10-17 20:04:38.506861] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59053 ] 00:07:53.140 [2024-10-17 20:04:38.685724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.398 [2024-10-17 20:04:38.942281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.926 20:04:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:55.926 20:04:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:55.926 20:04:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59053 00:07:55.926 20:04:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59053 00:07:55.926 20:04:41 
event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:56.492 20:04:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59032 00:07:56.492 20:04:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59032 ']' 00:07:56.492 20:04:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 59032 00:07:56.492 20:04:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:56.492 20:04:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:56.492 20:04:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59032 00:07:56.492 20:04:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:56.492 20:04:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:56.492 killing process with pid 59032 00:07:56.492 20:04:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59032' 00:07:56.492 20:04:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 59032 00:07:56.492 20:04:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 59032 00:08:01.756 20:04:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59053 00:08:01.756 20:04:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59053 ']' 00:08:01.756 20:04:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 59053 00:08:01.756 20:04:46 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@955 -- # uname 00:08:01.756 20:04:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:01.756 20:04:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59053 00:08:01.756 20:04:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:01.756 20:04:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:01.756 20:04:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59053' 00:08:01.756 killing process with pid 59053 00:08:01.756 20:04:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 59053 00:08:01.756 20:04:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 59053 00:08:03.129 00:08:03.129 real 0m11.467s 00:08:03.129 user 0m11.962s 00:08:03.129 sys 0m1.588s 00:08:03.129 20:04:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:03.129 20:04:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:03.129 ************************************ 00:08:03.129 END TEST locking_app_on_unlocked_coremask 00:08:03.129 ************************************ 00:08:03.129 20:04:48 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:08:03.129 20:04:48 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:03.129 20:04:48 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:03.129 20:04:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:03.129 ************************************ 00:08:03.129 START TEST 
locking_app_on_locked_coremask 00:08:03.129 ************************************ 00:08:03.129 20:04:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:08:03.129 20:04:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59198 00:08:03.129 20:04:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59198 /var/tmp/spdk.sock 00:08:03.129 20:04:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:03.129 20:04:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59198 ']' 00:08:03.129 20:04:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.129 20:04:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:03.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.129 20:04:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.129 20:04:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:03.129 20:04:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:03.388 [2024-10-17 20:04:48.793878] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 
00:08:03.388 [2024-10-17 20:04:48.794090] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59198 ] 00:08:03.388 [2024-10-17 20:04:48.972002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.646 [2024-10-17 20:04:49.099022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.581 20:04:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:04.581 20:04:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:04.581 20:04:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59219 00:08:04.581 20:04:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:04.581 20:04:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59219 /var/tmp/spdk2.sock 00:08:04.581 20:04:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:08:04.581 20:04:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59219 /var/tmp/spdk2.sock 00:08:04.581 20:04:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:04.581 20:04:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:04.581 20:04:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:04.581 20:04:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:08:04.581 20:04:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59219 /var/tmp/spdk2.sock 00:08:04.581 20:04:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59219 ']' 00:08:04.581 20:04:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:04.581 20:04:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:04.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:04.581 20:04:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:04.581 20:04:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:04.581 20:04:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:04.581 [2024-10-17 20:04:50.071224] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 00:08:04.581 [2024-10-17 20:04:50.071407] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59219 ] 00:08:04.878 [2024-10-17 20:04:50.250673] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59198 has claimed it. 00:08:04.878 [2024-10-17 20:04:50.250760] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:08:05.137 ERROR: process (pid: 59219) is no longer running 00:08:05.137 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59219) - No such process 00:08:05.137 20:04:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:05.137 20:04:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:08:05.137 20:04:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:08:05.137 20:04:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:05.137 20:04:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:05.137 20:04:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:05.137 20:04:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59198 00:08:05.137 20:04:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59198 00:08:05.137 20:04:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:05.703 20:04:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59198 00:08:05.703 20:04:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59198 ']' 00:08:05.703 20:04:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59198 00:08:05.703 20:04:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:05.703 20:04:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:05.703 20:04:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59198 00:08:05.703 
20:04:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:05.703 20:04:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:05.703 killing process with pid 59198 00:08:05.703 20:04:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59198' 00:08:05.703 20:04:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59198 00:08:05.703 20:04:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59198 00:08:08.233 00:08:08.233 real 0m4.732s 00:08:08.233 user 0m5.077s 00:08:08.233 sys 0m0.938s 00:08:08.233 20:04:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:08.233 20:04:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:08.233 ************************************ 00:08:08.233 END TEST locking_app_on_locked_coremask 00:08:08.233 ************************************ 00:08:08.233 20:04:53 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:08:08.233 20:04:53 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:08.233 20:04:53 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:08.233 20:04:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:08.233 ************************************ 00:08:08.233 START TEST locking_overlapped_coremask 00:08:08.233 ************************************ 00:08:08.233 20:04:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:08:08.233 20:04:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59288 00:08:08.233 20:04:53 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59288 /var/tmp/spdk.sock 00:08:08.233 20:04:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:08:08.233 20:04:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 59288 ']' 00:08:08.233 20:04:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.233 20:04:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:08.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:08.233 20:04:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:08.233 20:04:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:08.233 20:04:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:08.233 [2024-10-17 20:04:53.529622] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 
00:08:08.233 [2024-10-17 20:04:53.529819] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59288 ] 00:08:08.233 [2024-10-17 20:04:53.705712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:08.233 [2024-10-17 20:04:53.830826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:08.233 [2024-10-17 20:04:53.830904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.233 [2024-10-17 20:04:53.830911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:09.167 20:04:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:09.167 20:04:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:09.167 20:04:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:08:09.167 20:04:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59307 00:08:09.167 20:04:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59307 /var/tmp/spdk2.sock 00:08:09.167 20:04:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:08:09.167 20:04:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59307 /var/tmp/spdk2.sock 00:08:09.167 20:04:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:09.167 20:04:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:09.167 20:04:54 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:09.167 20:04:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:09.167 20:04:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59307 /var/tmp/spdk2.sock 00:08:09.167 20:04:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 59307 ']' 00:08:09.167 20:04:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:09.167 20:04:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:09.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:09.167 20:04:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:09.167 20:04:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:09.167 20:04:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:09.167 [2024-10-17 20:04:54.766706] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 00:08:09.167 [2024-10-17 20:04:54.766893] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59307 ] 00:08:09.425 [2024-10-17 20:04:54.939792] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59288 has claimed it. 00:08:09.425 [2024-10-17 20:04:54.939865] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:08:09.991 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59307) - No such process 00:08:09.991 ERROR: process (pid: 59307) is no longer running 00:08:09.991 20:04:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:09.991 20:04:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:08:09.991 20:04:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:08:09.991 20:04:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:09.991 20:04:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:09.991 20:04:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:09.991 20:04:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:08:09.991 20:04:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:09.991 20:04:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:09.991 20:04:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:09.991 20:04:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59288 00:08:09.991 20:04:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 59288 ']' 00:08:09.991 20:04:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 59288 00:08:09.991 20:04:55 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:08:09.991 20:04:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:09.991 20:04:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59288 00:08:09.991 20:04:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:09.991 20:04:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:09.991 killing process with pid 59288 00:08:09.991 20:04:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59288' 00:08:09.991 20:04:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 59288 00:08:09.991 20:04:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 59288 00:08:12.520 00:08:12.520 real 0m4.240s 00:08:12.520 user 0m11.508s 00:08:12.520 sys 0m0.651s 00:08:12.520 20:04:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:12.520 20:04:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:12.520 ************************************ 00:08:12.520 END TEST locking_overlapped_coremask 00:08:12.520 ************************************ 00:08:12.520 20:04:57 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:08:12.520 20:04:57 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:12.520 20:04:57 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:12.520 20:04:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:12.520 ************************************ 00:08:12.520 START TEST 
locking_overlapped_coremask_via_rpc 00:08:12.520 ************************************ 00:08:12.520 20:04:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:08:12.520 20:04:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59371 00:08:12.520 20:04:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59371 /var/tmp/spdk.sock 00:08:12.520 20:04:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59371 ']' 00:08:12.520 20:04:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:08:12.520 20:04:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.520 20:04:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:12.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:12.520 20:04:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:12.520 20:04:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:12.520 20:04:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.520 [2024-10-17 20:04:57.822450] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 
00:08:12.521 [2024-10-17 20:04:57.823344] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59371 ] 00:08:12.521 [2024-10-17 20:04:58.001955] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:08:12.521 [2024-10-17 20:04:58.002263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:12.521 [2024-10-17 20:04:58.126939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:12.521 [2024-10-17 20:04:58.127077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.521 [2024-10-17 20:04:58.127099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:13.454 20:04:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:13.455 20:04:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:13.455 20:04:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59389 00:08:13.455 20:04:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:08:13.455 20:04:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59389 /var/tmp/spdk2.sock 00:08:13.455 20:04:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59389 ']' 00:08:13.455 20:04:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:13.455 20:04:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:13.455 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:13.455 20:04:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:13.455 20:04:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:13.455 20:04:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:13.455 [2024-10-17 20:04:59.087477] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 00:08:13.455 [2024-10-17 20:04:59.087643] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59389 ] 00:08:13.715 [2024-10-17 20:04:59.258769] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:13.715 [2024-10-17 20:04:59.258823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:13.977 [2024-10-17 20:04:59.521668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:13.977 [2024-10-17 20:04:59.521693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:13.977 [2024-10-17 20:04:59.521721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:16.508 20:05:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:16.508 20:05:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:16.508 20:05:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:08:16.508 20:05:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.508 20:05:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:16.508 20:05:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.508 20:05:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:16.508 20:05:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:08:16.508 20:05:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:16.508 20:05:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:16.508 20:05:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:16.508 20:05:01 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:16.508 20:05:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:16.508 20:05:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:16.508 20:05:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.508 20:05:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:16.508 [2024-10-17 20:05:01.851223] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59371 has claimed it. 00:08:16.508 request: 00:08:16.508 { 00:08:16.508 "method": "framework_enable_cpumask_locks", 00:08:16.508 "req_id": 1 00:08:16.508 } 00:08:16.508 Got JSON-RPC error response 00:08:16.508 response: 00:08:16.508 { 00:08:16.508 "code": -32603, 00:08:16.508 "message": "Failed to claim CPU core: 2" 00:08:16.508 } 00:08:16.508 20:05:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:16.508 20:05:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:08:16.508 20:05:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:16.508 20:05:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:16.508 20:05:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:16.508 20:05:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59371 /var/tmp/spdk.sock 00:08:16.508 20:05:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # 
'[' -z 59371 ']' 00:08:16.508 20:05:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.508 20:05:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:16.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:16.508 20:05:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.508 20:05:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:16.508 20:05:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:16.508 20:05:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:16.508 20:05:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:16.508 20:05:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59389 /var/tmp/spdk2.sock 00:08:16.508 20:05:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59389 ']' 00:08:16.508 20:05:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:16.508 20:05:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:16.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:16.508 20:05:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:08:16.508 20:05:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:16.508 20:05:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.074 ************************************ 00:08:17.074 END TEST locking_overlapped_coremask_via_rpc 00:08:17.074 ************************************ 00:08:17.074 20:05:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:17.074 20:05:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:17.074 20:05:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:17.074 20:05:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:17.074 20:05:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:17.074 20:05:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:17.074 00:08:17.074 real 0m4.748s 00:08:17.074 user 0m1.738s 00:08:17.074 sys 0m0.231s 00:08:17.074 20:05:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:17.074 20:05:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.074 20:05:02 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:08:17.074 20:05:02 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59371 ]] 00:08:17.074 20:05:02 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59371 00:08:17.074 20:05:02 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59371 ']' 00:08:17.074 20:05:02 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59371 00:08:17.074 20:05:02 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:08:17.074 20:05:02 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:17.074 20:05:02 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59371 00:08:17.074 20:05:02 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:17.074 killing process with pid 59371 00:08:17.074 20:05:02 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:17.074 20:05:02 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59371' 00:08:17.074 20:05:02 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 59371 00:08:17.074 20:05:02 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 59371 00:08:19.650 20:05:04 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59389 ]] 00:08:19.650 20:05:04 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59389 00:08:19.650 20:05:04 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59389 ']' 00:08:19.650 20:05:04 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59389 00:08:19.650 20:05:04 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:08:19.650 20:05:04 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:19.650 20:05:04 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59389 00:08:19.650 20:05:04 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:08:19.650 20:05:04 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:08:19.650 killing process with pid 59389 00:08:19.650 20:05:04 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 59389' 00:08:19.650 20:05:04 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 59389 00:08:19.650 20:05:04 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 59389 00:08:21.550 20:05:06 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:21.550 20:05:06 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:21.550 20:05:06 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59371 ]] 00:08:21.550 20:05:06 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59371 00:08:21.550 20:05:06 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59371 ']' 00:08:21.550 20:05:06 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59371 00:08:21.550 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (59371) - No such process 00:08:21.550 Process with pid 59371 is not found 00:08:21.550 20:05:06 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 59371 is not found' 00:08:21.550 20:05:06 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59389 ]] 00:08:21.550 20:05:06 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59389 00:08:21.550 20:05:06 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59389 ']' 00:08:21.550 20:05:06 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59389 00:08:21.550 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (59389) - No such process 00:08:21.550 Process with pid 59389 is not found 00:08:21.550 20:05:06 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 59389 is not found' 00:08:21.550 20:05:06 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:21.550 00:08:21.550 real 0m49.354s 00:08:21.550 user 1m25.659s 00:08:21.550 sys 0m7.628s 00:08:21.550 20:05:06 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:21.550 20:05:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:21.550 
************************************ 00:08:21.550 END TEST cpu_locks 00:08:21.550 ************************************ 00:08:21.550 ************************************ 00:08:21.550 END TEST event 00:08:21.550 ************************************ 00:08:21.550 00:08:21.550 real 1m21.297s 00:08:21.550 user 2m29.364s 00:08:21.550 sys 0m11.830s 00:08:21.550 20:05:07 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:21.550 20:05:07 event -- common/autotest_common.sh@10 -- # set +x 00:08:21.550 20:05:07 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:21.550 20:05:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:21.550 20:05:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:21.550 20:05:07 -- common/autotest_common.sh@10 -- # set +x 00:08:21.550 ************************************ 00:08:21.550 START TEST thread 00:08:21.550 ************************************ 00:08:21.550 20:05:07 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:21.550 * Looking for test storage... 
00:08:21.550 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:08:21.550 20:05:07 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:21.550 20:05:07 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:08:21.550 20:05:07 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:21.812 20:05:07 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:21.812 20:05:07 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:21.812 20:05:07 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:21.812 20:05:07 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:21.812 20:05:07 thread -- scripts/common.sh@336 -- # IFS=.-: 00:08:21.812 20:05:07 thread -- scripts/common.sh@336 -- # read -ra ver1 00:08:21.812 20:05:07 thread -- scripts/common.sh@337 -- # IFS=.-: 00:08:21.812 20:05:07 thread -- scripts/common.sh@337 -- # read -ra ver2 00:08:21.812 20:05:07 thread -- scripts/common.sh@338 -- # local 'op=<' 00:08:21.812 20:05:07 thread -- scripts/common.sh@340 -- # ver1_l=2 00:08:21.812 20:05:07 thread -- scripts/common.sh@341 -- # ver2_l=1 00:08:21.812 20:05:07 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:21.812 20:05:07 thread -- scripts/common.sh@344 -- # case "$op" in 00:08:21.812 20:05:07 thread -- scripts/common.sh@345 -- # : 1 00:08:21.812 20:05:07 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:21.812 20:05:07 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:21.812 20:05:07 thread -- scripts/common.sh@365 -- # decimal 1 00:08:21.812 20:05:07 thread -- scripts/common.sh@353 -- # local d=1 00:08:21.812 20:05:07 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:21.812 20:05:07 thread -- scripts/common.sh@355 -- # echo 1 00:08:21.812 20:05:07 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:08:21.812 20:05:07 thread -- scripts/common.sh@366 -- # decimal 2 00:08:21.812 20:05:07 thread -- scripts/common.sh@353 -- # local d=2 00:08:21.812 20:05:07 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:21.812 20:05:07 thread -- scripts/common.sh@355 -- # echo 2 00:08:21.812 20:05:07 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:08:21.812 20:05:07 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:21.812 20:05:07 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:21.812 20:05:07 thread -- scripts/common.sh@368 -- # return 0 00:08:21.812 20:05:07 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:21.812 20:05:07 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:21.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.812 --rc genhtml_branch_coverage=1 00:08:21.812 --rc genhtml_function_coverage=1 00:08:21.812 --rc genhtml_legend=1 00:08:21.812 --rc geninfo_all_blocks=1 00:08:21.812 --rc geninfo_unexecuted_blocks=1 00:08:21.812 00:08:21.812 ' 00:08:21.812 20:05:07 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:21.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.812 --rc genhtml_branch_coverage=1 00:08:21.812 --rc genhtml_function_coverage=1 00:08:21.812 --rc genhtml_legend=1 00:08:21.812 --rc geninfo_all_blocks=1 00:08:21.812 --rc geninfo_unexecuted_blocks=1 00:08:21.812 00:08:21.812 ' 00:08:21.812 20:05:07 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:21.812 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.812 --rc genhtml_branch_coverage=1 00:08:21.812 --rc genhtml_function_coverage=1 00:08:21.812 --rc genhtml_legend=1 00:08:21.812 --rc geninfo_all_blocks=1 00:08:21.812 --rc geninfo_unexecuted_blocks=1 00:08:21.812 00:08:21.812 ' 00:08:21.812 20:05:07 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:21.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.812 --rc genhtml_branch_coverage=1 00:08:21.812 --rc genhtml_function_coverage=1 00:08:21.812 --rc genhtml_legend=1 00:08:21.812 --rc geninfo_all_blocks=1 00:08:21.812 --rc geninfo_unexecuted_blocks=1 00:08:21.812 00:08:21.812 ' 00:08:21.812 20:05:07 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:21.812 20:05:07 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:08:21.812 20:05:07 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:21.812 20:05:07 thread -- common/autotest_common.sh@10 -- # set +x 00:08:21.812 ************************************ 00:08:21.812 START TEST thread_poller_perf 00:08:21.812 ************************************ 00:08:21.812 20:05:07 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:21.812 [2024-10-17 20:05:07.298787] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 
00:08:21.812 [2024-10-17 20:05:07.298961] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59584 ] 00:08:22.074 [2024-10-17 20:05:07.470492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.074 [2024-10-17 20:05:07.604473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.074 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:08:23.446 [2024-10-17T20:05:09.100Z] ====================================== 00:08:23.446 [2024-10-17T20:05:09.100Z] busy:2212804234 (cyc) 00:08:23.446 [2024-10-17T20:05:09.100Z] total_run_count: 304000 00:08:23.446 [2024-10-17T20:05:09.100Z] tsc_hz: 2200000000 (cyc) 00:08:23.446 [2024-10-17T20:05:09.100Z] ====================================== 00:08:23.446 [2024-10-17T20:05:09.100Z] poller_cost: 7278 (cyc), 3308 (nsec) 00:08:23.446 00:08:23.446 real 0m1.598s 00:08:23.446 user 0m1.394s 00:08:23.446 sys 0m0.095s 00:08:23.446 20:05:08 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:23.446 20:05:08 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:23.446 ************************************ 00:08:23.446 END TEST thread_poller_perf 00:08:23.446 ************************************ 00:08:23.446 20:05:08 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:23.446 20:05:08 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:08:23.446 20:05:08 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:23.446 20:05:08 thread -- common/autotest_common.sh@10 -- # set +x 00:08:23.446 ************************************ 00:08:23.446 START TEST thread_poller_perf 00:08:23.446 
************************************ 00:08:23.446 20:05:08 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:23.446 [2024-10-17 20:05:08.949631] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 00:08:23.446 [2024-10-17 20:05:08.950340] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59626 ] 00:08:23.704 [2024-10-17 20:05:09.124462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.704 [2024-10-17 20:05:09.248475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.704 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:08:25.077 [2024-10-17T20:05:10.731Z] ====================================== 00:08:25.077 [2024-10-17T20:05:10.731Z] busy:2204128108 (cyc) 00:08:25.077 [2024-10-17T20:05:10.731Z] total_run_count: 3990000 00:08:25.077 [2024-10-17T20:05:10.731Z] tsc_hz: 2200000000 (cyc) 00:08:25.077 [2024-10-17T20:05:10.731Z] ====================================== 00:08:25.077 [2024-10-17T20:05:10.731Z] poller_cost: 552 (cyc), 250 (nsec) 00:08:25.077 00:08:25.077 real 0m1.566s 00:08:25.077 user 0m1.354s 00:08:25.077 sys 0m0.101s 00:08:25.077 20:05:10 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:25.077 20:05:10 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:25.077 ************************************ 00:08:25.077 END TEST thread_poller_perf 00:08:25.077 ************************************ 00:08:25.077 20:05:10 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:25.077 00:08:25.077 real 0m3.459s 00:08:25.077 user 0m2.901s 00:08:25.077 sys 0m0.336s 00:08:25.077 20:05:10 thread -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:08:25.077 20:05:10 thread -- common/autotest_common.sh@10 -- # set +x 00:08:25.077 ************************************ 00:08:25.077 END TEST thread 00:08:25.077 ************************************ 00:08:25.077 20:05:10 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:08:25.077 20:05:10 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:25.077 20:05:10 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:25.077 20:05:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:25.077 20:05:10 -- common/autotest_common.sh@10 -- # set +x 00:08:25.077 ************************************ 00:08:25.077 START TEST app_cmdline 00:08:25.077 ************************************ 00:08:25.077 20:05:10 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:25.077 * Looking for test storage... 00:08:25.077 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:25.077 20:05:10 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:25.077 20:05:10 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:25.077 20:05:10 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:08:25.356 20:05:10 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:25.356 20:05:10 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:25.356 20:05:10 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:25.356 20:05:10 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:25.356 20:05:10 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:08:25.356 20:05:10 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:08:25.356 20:05:10 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:08:25.356 20:05:10 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:08:25.356 20:05:10 app_cmdline -- scripts/common.sh@338 -- # 
local 'op=<' 00:08:25.356 20:05:10 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:08:25.356 20:05:10 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:08:25.356 20:05:10 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:25.356 20:05:10 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:08:25.356 20:05:10 app_cmdline -- scripts/common.sh@345 -- # : 1 00:08:25.356 20:05:10 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:25.356 20:05:10 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:25.356 20:05:10 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:08:25.356 20:05:10 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:08:25.356 20:05:10 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:25.356 20:05:10 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:08:25.356 20:05:10 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:08:25.356 20:05:10 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:08:25.356 20:05:10 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:08:25.356 20:05:10 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:25.356 20:05:10 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:08:25.356 20:05:10 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:08:25.356 20:05:10 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:25.356 20:05:10 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:25.356 20:05:10 app_cmdline -- scripts/common.sh@368 -- # return 0 00:08:25.356 20:05:10 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:25.356 20:05:10 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:25.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.356 --rc genhtml_branch_coverage=1 00:08:25.356 --rc genhtml_function_coverage=1 00:08:25.356 --rc 
genhtml_legend=1 00:08:25.356 --rc geninfo_all_blocks=1 00:08:25.356 --rc geninfo_unexecuted_blocks=1 00:08:25.356 00:08:25.356 ' 00:08:25.356 20:05:10 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:25.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.356 --rc genhtml_branch_coverage=1 00:08:25.357 --rc genhtml_function_coverage=1 00:08:25.357 --rc genhtml_legend=1 00:08:25.357 --rc geninfo_all_blocks=1 00:08:25.357 --rc geninfo_unexecuted_blocks=1 00:08:25.357 00:08:25.357 ' 00:08:25.357 20:05:10 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:25.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.357 --rc genhtml_branch_coverage=1 00:08:25.357 --rc genhtml_function_coverage=1 00:08:25.357 --rc genhtml_legend=1 00:08:25.357 --rc geninfo_all_blocks=1 00:08:25.357 --rc geninfo_unexecuted_blocks=1 00:08:25.357 00:08:25.357 ' 00:08:25.357 20:05:10 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:25.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.357 --rc genhtml_branch_coverage=1 00:08:25.357 --rc genhtml_function_coverage=1 00:08:25.357 --rc genhtml_legend=1 00:08:25.357 --rc geninfo_all_blocks=1 00:08:25.357 --rc geninfo_unexecuted_blocks=1 00:08:25.357 00:08:25.357 ' 00:08:25.357 20:05:10 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:25.357 20:05:10 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59704 00:08:25.357 20:05:10 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59704 00:08:25.357 20:05:10 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:25.357 20:05:10 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 59704 ']' 00:08:25.357 20:05:10 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.357 20:05:10 app_cmdline -- common/autotest_common.sh@836 -- # 
local max_retries=100 00:08:25.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.357 20:05:10 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:25.357 20:05:10 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:25.357 20:05:10 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:25.357 [2024-10-17 20:05:10.881177] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 00:08:25.357 [2024-10-17 20:05:10.881828] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59704 ] 00:08:25.616 [2024-10-17 20:05:11.051999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.616 [2024-10-17 20:05:11.178395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.553 20:05:12 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:26.553 20:05:12 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:08:26.553 20:05:12 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:08:26.811 { 00:08:26.811 "version": "SPDK v25.01-pre git sha1 5c4ed23c8", 00:08:26.811 "fields": { 00:08:26.811 "major": 25, 00:08:26.811 "minor": 1, 00:08:26.811 "patch": 0, 00:08:26.811 "suffix": "-pre", 00:08:26.811 "commit": "5c4ed23c8" 00:08:26.811 } 00:08:26.811 } 00:08:26.811 20:05:12 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:26.811 20:05:12 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:26.811 20:05:12 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:26.811 20:05:12 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:26.811 20:05:12 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:26.811 20:05:12 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:26.811 20:05:12 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.811 20:05:12 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:26.811 20:05:12 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:26.811 20:05:12 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.811 20:05:12 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:26.811 20:05:12 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:26.811 20:05:12 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:26.811 20:05:12 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:08:26.811 20:05:12 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:26.811 20:05:12 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:26.811 20:05:12 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:26.811 20:05:12 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:26.811 20:05:12 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:26.811 20:05:12 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:26.811 20:05:12 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:26.811 20:05:12 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:26.812 20:05:12 app_cmdline -- common/autotest_common.sh@644 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:26.812 20:05:12 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:27.070 request: 00:08:27.070 { 00:08:27.070 "method": "env_dpdk_get_mem_stats", 00:08:27.070 "req_id": 1 00:08:27.070 } 00:08:27.070 Got JSON-RPC error response 00:08:27.070 response: 00:08:27.070 { 00:08:27.070 "code": -32601, 00:08:27.070 "message": "Method not found" 00:08:27.070 } 00:08:27.070 20:05:12 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:08:27.070 20:05:12 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:27.070 20:05:12 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:27.070 20:05:12 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:27.070 20:05:12 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59704 00:08:27.070 20:05:12 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 59704 ']' 00:08:27.070 20:05:12 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 59704 00:08:27.070 20:05:12 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:08:27.070 20:05:12 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:27.070 20:05:12 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59704 00:08:27.070 20:05:12 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:27.070 20:05:12 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:27.070 20:05:12 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59704' 00:08:27.070 killing process with pid 59704 00:08:27.070 20:05:12 app_cmdline -- common/autotest_common.sh@969 -- # kill 59704 00:08:27.070 20:05:12 app_cmdline -- common/autotest_common.sh@974 -- # wait 59704 00:08:29.645 00:08:29.645 real 0m4.110s 00:08:29.645 user 0m4.505s 00:08:29.645 sys 0m0.679s 00:08:29.645 20:05:14 app_cmdline -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:08:29.645 20:05:14 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:29.645 ************************************ 00:08:29.645 END TEST app_cmdline 00:08:29.645 ************************************ 00:08:29.645 20:05:14 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:29.645 20:05:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:29.645 20:05:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:29.645 20:05:14 -- common/autotest_common.sh@10 -- # set +x 00:08:29.645 ************************************ 00:08:29.645 START TEST version 00:08:29.645 ************************************ 00:08:29.645 20:05:14 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:29.645 * Looking for test storage... 00:08:29.645 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:29.645 20:05:14 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:29.645 20:05:14 version -- common/autotest_common.sh@1691 -- # lcov --version 00:08:29.645 20:05:14 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:29.645 20:05:14 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:29.645 20:05:14 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:29.645 20:05:14 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:29.645 20:05:14 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:29.645 20:05:14 version -- scripts/common.sh@336 -- # IFS=.-: 00:08:29.645 20:05:14 version -- scripts/common.sh@336 -- # read -ra ver1 00:08:29.645 20:05:14 version -- scripts/common.sh@337 -- # IFS=.-: 00:08:29.645 20:05:14 version -- scripts/common.sh@337 -- # read -ra ver2 00:08:29.645 20:05:14 version -- scripts/common.sh@338 -- # local 'op=<' 00:08:29.645 20:05:14 version -- scripts/common.sh@340 -- # ver1_l=2 00:08:29.645 20:05:14 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:08:29.645 20:05:14 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:29.646 20:05:14 version -- scripts/common.sh@344 -- # case "$op" in 00:08:29.646 20:05:14 version -- scripts/common.sh@345 -- # : 1 00:08:29.646 20:05:14 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:29.646 20:05:14 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:29.646 20:05:14 version -- scripts/common.sh@365 -- # decimal 1 00:08:29.646 20:05:14 version -- scripts/common.sh@353 -- # local d=1 00:08:29.646 20:05:14 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:29.646 20:05:14 version -- scripts/common.sh@355 -- # echo 1 00:08:29.646 20:05:14 version -- scripts/common.sh@365 -- # ver1[v]=1 00:08:29.646 20:05:14 version -- scripts/common.sh@366 -- # decimal 2 00:08:29.646 20:05:14 version -- scripts/common.sh@353 -- # local d=2 00:08:29.646 20:05:14 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:29.646 20:05:14 version -- scripts/common.sh@355 -- # echo 2 00:08:29.646 20:05:14 version -- scripts/common.sh@366 -- # ver2[v]=2 00:08:29.646 20:05:14 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:29.646 20:05:14 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:29.646 20:05:14 version -- scripts/common.sh@368 -- # return 0 00:08:29.646 20:05:14 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:29.646 20:05:14 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:29.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.646 --rc genhtml_branch_coverage=1 00:08:29.646 --rc genhtml_function_coverage=1 00:08:29.646 --rc genhtml_legend=1 00:08:29.646 --rc geninfo_all_blocks=1 00:08:29.646 --rc geninfo_unexecuted_blocks=1 00:08:29.646 00:08:29.646 ' 00:08:29.646 20:05:14 version -- common/autotest_common.sh@1704 -- # 
LCOV_OPTS=' 00:08:29.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.646 --rc genhtml_branch_coverage=1 00:08:29.646 --rc genhtml_function_coverage=1 00:08:29.646 --rc genhtml_legend=1 00:08:29.646 --rc geninfo_all_blocks=1 00:08:29.646 --rc geninfo_unexecuted_blocks=1 00:08:29.646 00:08:29.646 ' 00:08:29.646 20:05:14 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:29.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.646 --rc genhtml_branch_coverage=1 00:08:29.646 --rc genhtml_function_coverage=1 00:08:29.646 --rc genhtml_legend=1 00:08:29.646 --rc geninfo_all_blocks=1 00:08:29.646 --rc geninfo_unexecuted_blocks=1 00:08:29.646 00:08:29.646 ' 00:08:29.646 20:05:14 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:29.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.646 --rc genhtml_branch_coverage=1 00:08:29.646 --rc genhtml_function_coverage=1 00:08:29.646 --rc genhtml_legend=1 00:08:29.646 --rc geninfo_all_blocks=1 00:08:29.646 --rc geninfo_unexecuted_blocks=1 00:08:29.646 00:08:29.646 ' 00:08:29.646 20:05:14 version -- app/version.sh@17 -- # get_header_version major 00:08:29.646 20:05:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:29.646 20:05:14 version -- app/version.sh@14 -- # cut -f2 00:08:29.646 20:05:14 version -- app/version.sh@14 -- # tr -d '"' 00:08:29.646 20:05:14 version -- app/version.sh@17 -- # major=25 00:08:29.646 20:05:14 version -- app/version.sh@18 -- # get_header_version minor 00:08:29.646 20:05:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:29.646 20:05:14 version -- app/version.sh@14 -- # cut -f2 00:08:29.646 20:05:14 version -- app/version.sh@14 -- # tr -d '"' 00:08:29.646 20:05:14 version -- app/version.sh@18 -- # minor=1 00:08:29.646 20:05:14 
version -- app/version.sh@19 -- # get_header_version patch 00:08:29.646 20:05:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:29.646 20:05:14 version -- app/version.sh@14 -- # cut -f2 00:08:29.646 20:05:14 version -- app/version.sh@14 -- # tr -d '"' 00:08:29.646 20:05:14 version -- app/version.sh@19 -- # patch=0 00:08:29.646 20:05:14 version -- app/version.sh@20 -- # get_header_version suffix 00:08:29.646 20:05:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:29.646 20:05:14 version -- app/version.sh@14 -- # cut -f2 00:08:29.646 20:05:14 version -- app/version.sh@14 -- # tr -d '"' 00:08:29.646 20:05:14 version -- app/version.sh@20 -- # suffix=-pre 00:08:29.646 20:05:14 version -- app/version.sh@22 -- # version=25.1 00:08:29.646 20:05:14 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:29.646 20:05:14 version -- app/version.sh@28 -- # version=25.1rc0 00:08:29.646 20:05:14 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:29.646 20:05:14 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:29.646 20:05:14 version -- app/version.sh@30 -- # py_version=25.1rc0 00:08:29.646 20:05:14 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:08:29.646 00:08:29.646 real 0m0.261s 00:08:29.646 user 0m0.182s 00:08:29.646 sys 0m0.110s 00:08:29.646 20:05:14 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:29.646 20:05:14 version -- common/autotest_common.sh@10 -- # set +x 00:08:29.646 ************************************ 00:08:29.646 END TEST version 00:08:29.646 ************************************ 00:08:29.646 
20:05:15 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:08:29.646 20:05:15 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:08:29.646 20:05:15 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:08:29.646 20:05:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:29.646 20:05:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:29.646 20:05:15 -- common/autotest_common.sh@10 -- # set +x 00:08:29.646 ************************************ 00:08:29.646 START TEST bdev_raid 00:08:29.646 ************************************ 00:08:29.646 20:05:15 bdev_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:08:29.646 * Looking for test storage... 00:08:29.646 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:29.646 20:05:15 bdev_raid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:29.646 20:05:15 bdev_raid -- common/autotest_common.sh@1691 -- # lcov --version 00:08:29.646 20:05:15 bdev_raid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:29.646 20:05:15 bdev_raid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:29.646 20:05:15 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:29.646 20:05:15 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:29.646 20:05:15 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:29.646 20:05:15 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:08:29.646 20:05:15 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:08:29.646 20:05:15 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:08:29.646 20:05:15 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:08:29.646 20:05:15 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:08:29.646 20:05:15 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:08:29.646 20:05:15 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:08:29.646 20:05:15 bdev_raid -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:08:29.646 20:05:15 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:08:29.646 20:05:15 bdev_raid -- scripts/common.sh@345 -- # : 1 00:08:29.646 20:05:15 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:29.646 20:05:15 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:29.646 20:05:15 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:08:29.646 20:05:15 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:08:29.646 20:05:15 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:29.646 20:05:15 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:08:29.646 20:05:15 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:08:29.646 20:05:15 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:08:29.646 20:05:15 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:08:29.646 20:05:15 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:29.646 20:05:15 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:08:29.646 20:05:15 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:08:29.646 20:05:15 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:29.646 20:05:15 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:29.646 20:05:15 bdev_raid -- scripts/common.sh@368 -- # return 0 00:08:29.646 20:05:15 bdev_raid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:29.646 20:05:15 bdev_raid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:29.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.646 --rc genhtml_branch_coverage=1 00:08:29.646 --rc genhtml_function_coverage=1 00:08:29.646 --rc genhtml_legend=1 00:08:29.646 --rc geninfo_all_blocks=1 00:08:29.646 --rc geninfo_unexecuted_blocks=1 00:08:29.646 00:08:29.646 ' 00:08:29.646 20:05:15 bdev_raid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:29.646 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:08:29.646 --rc genhtml_branch_coverage=1 00:08:29.646 --rc genhtml_function_coverage=1 00:08:29.646 --rc genhtml_legend=1 00:08:29.646 --rc geninfo_all_blocks=1 00:08:29.646 --rc geninfo_unexecuted_blocks=1 00:08:29.646 00:08:29.646 ' 00:08:29.646 20:05:15 bdev_raid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:29.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.646 --rc genhtml_branch_coverage=1 00:08:29.646 --rc genhtml_function_coverage=1 00:08:29.646 --rc genhtml_legend=1 00:08:29.646 --rc geninfo_all_blocks=1 00:08:29.646 --rc geninfo_unexecuted_blocks=1 00:08:29.646 00:08:29.646 ' 00:08:29.646 20:05:15 bdev_raid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:29.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.646 --rc genhtml_branch_coverage=1 00:08:29.646 --rc genhtml_function_coverage=1 00:08:29.646 --rc genhtml_legend=1 00:08:29.646 --rc geninfo_all_blocks=1 00:08:29.646 --rc geninfo_unexecuted_blocks=1 00:08:29.646 00:08:29.646 ' 00:08:29.646 20:05:15 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:29.646 20:05:15 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:08:29.646 20:05:15 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:08:29.646 20:05:15 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:08:29.646 20:05:15 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:08:29.646 20:05:15 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:08:29.646 20:05:15 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:08:29.646 20:05:15 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:29.646 20:05:15 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:29.647 20:05:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:29.647 ************************************ 
00:08:29.647 START TEST raid1_resize_data_offset_test 00:08:29.647 ************************************ 00:08:29.647 20:05:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1125 -- # raid_resize_data_offset_test 00:08:29.647 20:05:15 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=59897 00:08:29.647 Process raid pid: 59897 00:08:29.647 20:05:15 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 59897' 00:08:29.647 20:05:15 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 59897 00:08:29.647 20:05:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@831 -- # '[' -z 59897 ']' 00:08:29.647 20:05:15 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:29.647 20:05:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.647 20:05:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:29.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:29.647 20:05:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:29.647 20:05:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:29.647 20:05:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.905 [2024-10-17 20:05:15.350701] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 
00:08:29.905 [2024-10-17 20:05:15.351586] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:29.905 [2024-10-17 20:05:15.527833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.163 [2024-10-17 20:05:15.651969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.420 [2024-10-17 20:05:15.851479] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:30.420 [2024-10-17 20:05:15.851537] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:30.735 20:05:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:30.735 20:05:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # return 0 00:08:30.735 20:05:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:08:30.735 20:05:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.735 20:05:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.993 malloc0 00:08:30.993 20:05:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.993 20:05:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:08:30.993 20:05:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.993 20:05:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.993 malloc1 00:08:30.993 20:05:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.993 20:05:16 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:08:30.993 20:05:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.993 20:05:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.993 null0 00:08:30.993 20:05:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.993 20:05:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:08:30.993 20:05:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.993 20:05:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.993 [2024-10-17 20:05:16.524169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:08:30.993 [2024-10-17 20:05:16.526529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:30.993 [2024-10-17 20:05:16.526601] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:08:30.993 [2024-10-17 20:05:16.526862] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:30.993 [2024-10-17 20:05:16.526884] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:08:30.993 [2024-10-17 20:05:16.527285] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:30.993 [2024-10-17 20:05:16.527526] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:30.993 [2024-10-17 20:05:16.527548] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:08:30.993 [2024-10-17 20:05:16.527719] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:08:30.993 20:05:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:30.993 20:05:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:30.993 20:05:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset'
00:08:30.993 20:05:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:30.993 20:05:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:08:30.993 20:05:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:30.993 20:05:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 ))
00:08:30.993 20:05:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0
00:08:30.993 20:05:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:30.993 20:05:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:08:30.993 [2024-10-17 20:05:16.588186] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0
00:08:30.993 20:05:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:30.993 20:05:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30
00:08:30.993 20:05:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:30.993 20:05:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:08:31.560 malloc2
00:08:31.560 20:05:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:31.560 20:05:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev Raid malloc2
00:08:31.560 20:05:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:31.560 20:05:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:08:31.560 [2024-10-17 20:05:17.108514] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:08:31.560 [2024-10-17 20:05:17.126356] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:08:31.560 20:05:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:31.560 [2024-10-17 20:05:17.128828] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid
00:08:31.560 20:05:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:31.560 20:05:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset'
00:08:31.560 20:05:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:31.560 20:05:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:08:31.560 20:05:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:31.560 20:05:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 ))
00:08:31.560 20:05:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 59897
00:08:31.560 20:05:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@950 -- # '[' -z 59897 ']'
00:08:31.560 20:05:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # kill -0 59897
00:08:31.560 20:05:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # uname
00:08:31.560 20:05:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:08:31.560 20:05:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59897
00:08:31.817 20:05:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
killing process with pid 59897
00:08:31.817 20:05:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:08:31.817 20:05:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59897'
00:08:31.817 20:05:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@969 -- # kill 59897
00:08:31.817 20:05:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@974 -- # wait 59897
00:08:31.817 [2024-10-17 20:05:17.222171] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:08:31.817 [2024-10-17 20:05:17.224491] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled
00:08:31.817 [2024-10-17 20:05:17.224630] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:31.817 [2024-10-17 20:05:17.224657] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2
00:08:31.817 [2024-10-17 20:05:17.258172] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:31.818 [2024-10-17 20:05:17.258677] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:31.818 [2024-10-17 20:05:17.258704] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:08:33.195 [2024-10-17 20:05:18.835786] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:08:34.570 20:05:19 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0
00:08:34.570
00:08:34.570 real 0m4.620s
00:08:34.570 user 0m4.566s
00:08:34.570 sys 0m0.674s
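The `killprocess` sequence traced above follows a recognizable pattern: check the pid argument, probe liveness with `kill -0`, look up the command name with `ps --no-headers -o comm=`, then `kill` and `wait`. The helper below is a simplified, hypothetical reconstruction of that pattern for illustration only, not the actual `autotest_common.sh` source (which also handles the sudo case seen in the `'[' reactor_0 = sudo ']'` check):

```shell
#!/usr/bin/env bash
# Simplified sketch of the killprocess pattern visible in the trace:
# validate the pid, confirm the process is alive, resolve its name,
# then terminate it and reap the child.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 1          # liveness probe
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")
    echo "killing process with pid $pid ($process_name)"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                 # reap; ignore exit status
    return 0
}

sleep 60 &
killprocess $!
```

The `wait` at the end matters: without it the killed target would linger as a zombie until the test script itself exits.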
************************************
00:08:34.570 20:05:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:34.570 20:05:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:08:34.570 END TEST raid1_resize_data_offset_test
00:08:34.570 ************************************
00:08:34.570 20:05:19 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0
00:08:34.570 20:05:19 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:08:34.570 20:05:19 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:34.570 20:05:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:08:34.570 ************************************
00:08:34.570 START TEST raid0_resize_superblock_test
00:08:34.570 ************************************
00:08:34.570 20:05:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 0
00:08:34.570 20:05:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0
00:08:34.570 20:05:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=59975
Process raid pid: 59975
00:08:34.570 20:05:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 59975'
00:08:34.570 20:05:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 59975
00:08:34.570 20:05:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:08:34.570 20:05:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 59975 ']'
00:08:34.570 20:05:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:34.570 20:05:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:08:34.570 20:05:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:34.570 20:05:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:08:34.570 20:05:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:34.570 [2024-10-17 20:05:20.022599] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization...
00:08:34.570 [2024-10-17 20:05:20.022774] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:34.570 [2024-10-17 20:05:20.200298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:34.828 [2024-10-17 20:05:20.326342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:35.087 [2024-10-17 20:05:20.539717] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:35.087 [2024-10-17 20:05:20.539760] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:35.346 20:05:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:08:35.346 20:05:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0
00:08:35.346 20:05:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512
00:08:35.346 20:05:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:35.346 20:05:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
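The `waitforlisten` step above blocks until the freshly spawned `bdev_svc` target is ready, polling with a bounded retry count (`max_retries=100` in the trace). The loop below is a minimal, hypothetical sketch of that polling pattern; it waits for a path to appear, using a plain file as a stand-in for the real UNIX domain socket at `/var/tmp/spdk.sock` (the actual helper additionally confirms readiness over RPC):

```shell
#!/usr/bin/env bash
# Sketch of the waitforlisten polling pattern: announce what we are
# waiting for, then retry until the socket path exists or retries run out.
waitforlisten() {
    local rpc_addr=${1:-/var/tmp/spdk.sock}
    local max_retries=100
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    local i
    for ((i = 0; i < max_retries; i++)); do
        [ -e "$rpc_addr" ] && return 0   # real helper would test -S for a socket
        sleep 0.1
    done
    return 1                             # target never came up
}

addr=$(mktemp -u)
(sleep 0.3; touch "$addr") &             # stand-in for the target creating its socket
waitforlisten "$addr"
rm -f "$addr"
```

The `(( i == 0 ))` check visible in the trace corresponds to the happy path: the target was already listening on the first probe.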
00:08:35.913 malloc0
00:08:35.913 20:05:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:35.913 20:05:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:08:35.913 20:05:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:35.913 20:05:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:35.913 [2024-10-17 20:05:21.501365] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:08:35.913 [2024-10-17 20:05:21.501473] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:35.913 [2024-10-17 20:05:21.501505] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:08:35.913 [2024-10-17 20:05:21.501525] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:35.913 [2024-10-17 20:05:21.504250] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:35.913 [2024-10-17 20:05:21.504519] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
00:08:35.913 pt0
00:08:35.913 20:05:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:35.913 20:05:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0
00:08:35.913 20:05:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:35.913 20:05:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:36.172 3969f3f9-317d-4baa-8f5c-5cdc62607632
00:08:36.172 20:05:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:36.172 20:05:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64
00:08:36.172 20:05:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:36.172 20:05:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:36.172 d39d4395-d062-431c-9bc7-98ffbfadfac5
00:08:36.172 20:05:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:36.172 20:05:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64
00:08:36.172 20:05:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:36.172 20:05:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:36.173 d44f64f4-c200-43b9-8326-1a5423ca2247
00:08:36.173 20:05:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:36.173 20:05:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in
00:08:36.173 20:05:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s
00:08:36.173 20:05:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:36.173 20:05:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:36.173 [2024-10-17 20:05:21.650224] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev d39d4395-d062-431c-9bc7-98ffbfadfac5 is claimed
00:08:36.173 [2024-10-17 20:05:21.650346] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev d44f64f4-c200-43b9-8326-1a5423ca2247 is claimed
00:08:36.173 [2024-10-17 20:05:21.650513] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:08:36.173 [2024-10-17 20:05:21.650538] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512
00:08:36.173 [2024-10-17 20:05:21.650839] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:08:36.173 [2024-10-17 20:05:21.651170] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:08:36.173 [2024-10-17 20:05:21.651189] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
00:08:36.173 [2024-10-17 20:05:21.651404] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:36.173 20:05:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:36.173 20:05:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:08:36.173 20:05:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks'
00:08:36.173 20:05:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:36.173 20:05:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:36.173 20:05:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:36.173 20:05:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 ))
00:08:36.173 20:05:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:08:36.173 20:05:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks'
00:08:36.173 20:05:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:36.173 20:05:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:36.173 20:05:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:36.173 20:05:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 ))
00:08:36.173 20:05:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:08:36.173 20:05:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid
00:08:36.173 20:05:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:08:36.173 20:05:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks'
00:08:36.173 20:05:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:36.173 20:05:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:36.173 [2024-10-17 20:05:21.770546] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:36.173 20:05:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:36.173 20:05:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:08:36.173 20:05:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:08:36.173 20:05:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 ))
00:08:36.173 20:05:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100
00:08:36.173 20:05:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:36.173 20:05:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:36.173 [2024-10-17 20:05:21.814616] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:08:36.173 [2024-10-17 20:05:21.814821] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'd39d4395-d062-431c-9bc7-98ffbfadfac5' was resized: old size 131072, new size 204800
00:08:36.173 20:05:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:36.173 20:05:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100
00:08:36.173 20:05:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:36.173 20:05:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:36.173 [2024-10-17 20:05:21.822509] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:08:36.173 [2024-10-17 20:05:21.822536] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'd44f64f4-c200-43b9-8326-1a5423ca2247' was resized: old size 131072, new size 204800
00:08:36.173 [2024-10-17 20:05:21.822588] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216
00:08:36.433 20:05:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:36.433 20:05:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:08:36.433 20:05:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks'
00:08:36.433 20:05:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:36.433 20:05:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:36.433 20:05:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:36.433 20:05:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 ))
00:08:36.433 20:05:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:08:36.433 20:05:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:36.433 20:05:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:36.433 20:05:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks'
00:08:36.433 20:05:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:36.433 20:05:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 ))
00:08:36.433 20:05:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:08:36.433 20:05:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid
00:08:36.433 20:05:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:08:36.433 20:05:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks'
00:08:36.433 20:05:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:36.433 20:05:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:36.433 [2024-10-17 20:05:21.942637] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:36.433 20:05:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:36.433 20:05:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:08:36.433 20:05:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:08:36.433 20:05:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 ))
00:08:36.433 20:05:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0
00:08:36.433 20:05:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:36.433 20:05:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:36.433 [2024-10-17 20:05:21.986459] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0
00:08:36.433 [2024-10-17 20:05:21.986562] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0
00:08:36.433 [2024-10-17 20:05:21.986580] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:08:36.433 [2024-10-17 20:05:21.986602] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1
00:08:36.433 [2024-10-17 20:05:21.986730] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:36.433 [2024-10-17 20:05:21.986779] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:36.433 [2024-10-17 20:05:21.986800] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:08:36.433 20:05:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:36.433 20:05:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:08:36.433 20:05:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:36.433 20:05:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:36.433 [2024-10-17 20:05:21.994291] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:08:36.433 [2024-10-17 20:05:21.994399] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:36.433 [2024-10-17 20:05:21.994444] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180
00:08:36.433 [2024-10-17 20:05:21.994463] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:36.433 [2024-10-17 20:05:21.997429] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:36.433 [2024-10-17 20:05:21.997492] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
00:08:36.433 pt0
00:08:36.433 20:05:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:36.433 20:05:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine
00:08:36.433 20:05:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:36.433 20:05:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:36.433 [2024-10-17 20:05:21.999997] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev d39d4395-d062-431c-9bc7-98ffbfadfac5
00:08:36.433 [2024-10-17 20:05:22.000139] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev d39d4395-d062-431c-9bc7-98ffbfadfac5 is claimed
00:08:36.433 [2024-10-17 20:05:22.000282] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev d44f64f4-c200-43b9-8326-1a5423ca2247
00:08:36.433 [2024-10-17 20:05:22.000324] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev d44f64f4-c200-43b9-8326-1a5423ca2247 is claimed
00:08:36.433 [2024-10-17 20:05:22.000484] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev d44f64f4-c200-43b9-8326-1a5423ca2247 (2) smaller than existing raid bdev Raid (3)
00:08:36.433 [2024-10-17 20:05:22.000520] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev d39d4395-d062-431c-9bc7-98ffbfadfac5: File exists
00:08:36.433 [2024-10-17 20:05:22.000614] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00
00:08:36.433 [2024-10-17 20:05:22.000634] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512
00:08:36.433 [2024-10-17 20:05:22.000975] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:08:36.433 [2024-10-17 20:05:22.001209] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00
00:08:36.433 [2024-10-17 20:05:22.001226] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00
00:08:36.433 [2024-10-17 20:05:22.001411] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:36.433 20:05:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:36.433 20:05:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:08:36.433 20:05:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid
00:08:36.433 20:05:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:08:36.433 20:05:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks'
00:08:36.433 20:05:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:36.433 20:05:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:36.433 [2024-10-17 20:05:22.014724] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:36.433 20:05:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:36.433 20:05:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:08:36.433 20:05:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:08:36.433 20:05:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 ))
00:08:36.433 20:05:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 59975
00:08:36.433 20:05:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 59975 ']'
00:08:36.433 20:05:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 59975
00:08:36.433 20:05:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # uname
00:08:36.433 20:05:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:08:36.433 20:05:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59975
00:08:36.692 killing process with pid 59975
20:05:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:08:36.692 20:05:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:08:36.692 20:05:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59975'
00:08:36.692 20:05:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 59975
00:08:36.692 [2024-10-17 20:05:22.091713] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:08:36.692 20:05:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 59975
00:08:36.692 [2024-10-17 20:05:22.091813] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:36.692 [2024-10-17 20:05:22.091877] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:36.692 [2024-10-17 20:05:22.091892] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline
00:08:38.672 [2024-10-17 20:05:23.315901] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:08:38.672 ************************************
00:08:38.672 END TEST raid0_resize_superblock_test
00:08:38.672 ************************************
00:08:38.672 20:05:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0
00:08:38.672
00:08:38.672 real 0m4.362s
00:08:38.672 user 0m4.681s
00:08:38.672 sys 0m0.614s
00:08:38.672 20:05:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:38.672 20:05:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:38.931 20:05:24 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1
00:08:38.931 20:05:24 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:08:38.931 20:05:24 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:38.931 20:05:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:08:38.931 ************************************
00:08:38.931 START TEST raid1_resize_superblock_test
00:08:38.931 ************************************
00:08:38.931 20:05:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 1
00:08:38.931 20:05:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1
00:08:38.931 20:05:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60074
00:08:38.931 20:05:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60074'
Process raid pid: 60074
00:08:38.931 20:05:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:08:38.931 20:05:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60074
00:08:38.931 20:05:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 60074 ']'
00:08:38.931 20:05:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:38.931 20:05:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:08:38.931 20:05:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:38.931 20:05:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:08:38.931 20:05:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:38.931 [2024-10-17 20:05:24.437469] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization...
00:08:38.931 [2024-10-17 20:05:24.437962] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:39.190 [2024-10-17 20:05:24.614130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:39.190 [2024-10-17 20:05:24.736756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:39.449 [2024-10-17 20:05:24.929359] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:39.449 [2024-10-17 20:05:24.929421] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:40.017 20:05:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:08:40.017 20:05:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0
00:08:40.017 20:05:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512
00:08:40.017 20:05:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:40.017 20:05:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:40.276 malloc0
00:08:40.276 20:05:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:40.276 20:05:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:08:40.276 20:05:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:40.276 20:05:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:40.535 [2024-10-17 20:05:25.929594] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:08:40.535 [2024-10-17 20:05:25.929688] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:40.535 [2024-10-17 20:05:25.929727] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:08:40.535 [2024-10-17 20:05:25.929747] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:40.535 [2024-10-17 20:05:25.932771] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:40.535 [2024-10-17 20:05:25.933047] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
00:08:40.535 pt0
00:08:40.535 20:05:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:40.535 20:05:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0
00:08:40.535 20:05:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:40.535 20:05:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:40.535 575fa45a-4a3b-433d-9995-1843c6dd5ddf
00:08:40.535 20:05:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:40.535 20:05:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64
00:08:40.535 20:05:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:40.535 20:05:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:40.535 9ca527e6-4757-4b7a-a3dc-5ff54924d05a
00:08:40.535 20:05:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:40.535 20:05:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64
00:08:40.535 20:05:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:40.535 20:05:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:40.535 ebbe0ecc-0997-4ae0-9601-268503289ac5
00:08:40.535 20:05:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:40.535 20:05:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in
00:08:40.535 20:05:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s
00:08:40.535 20:05:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:40.535 20:05:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:40.535 [2024-10-17 20:05:26.077417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 9ca527e6-4757-4b7a-a3dc-5ff54924d05a is claimed
00:08:40.535 [2024-10-17 20:05:26.077547] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev ebbe0ecc-0997-4ae0-9601-268503289ac5 is claimed
00:08:40.535 [2024-10-17 20:05:26.077769] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:08:40.535 [2024-10-17 20:05:26.077815] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512
00:08:40.535 [2024-10-17 20:05:26.078221] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:08:40.535 [2024-10-17 20:05:26.078491] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:08:40.535 [2024-10-17 20:05:26.078562] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
00:08:40.536 [2024-10-17 20:05:26.078770] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:40.536 20:05:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:40.536 20:05:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks'
00:08:40.536 20:05:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:08:40.536 20:05:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:40.536 20:05:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:40.536 20:05:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:40.536 20:05:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 ))
00:08:40.536 20:05:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:08:40.536 20:05:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks'
00:08:40.536 20:05:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:40.536 20:05:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:40.794 20:05:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:40.794 20:05:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 ))
20:05:26 bdev_raid.raid1_resize_superblock_test --
bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:40.794 20:05:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:40.795 20:05:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.795 20:05:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.795 20:05:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:40.795 20:05:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:08:40.795 [2024-10-17 20:05:26.193709] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:40.795 20:05:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.795 20:05:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:40.795 20:05:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:40.795 20:05:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:08:40.795 20:05:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:08:40.795 20:05:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.795 20:05:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.795 [2024-10-17 20:05:26.245664] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:40.795 [2024-10-17 20:05:26.245852] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '9ca527e6-4757-4b7a-a3dc-5ff54924d05a' was resized: old size 131072, new size 204800 00:08:40.795 20:05:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.795 20:05:26 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:08:40.795 20:05:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.795 20:05:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.795 [2024-10-17 20:05:26.253612] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:40.795 [2024-10-17 20:05:26.253639] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'ebbe0ecc-0997-4ae0-9601-268503289ac5' was resized: old size 131072, new size 204800 00:08:40.795 [2024-10-17 20:05:26.253692] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:08:40.795 20:05:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.795 20:05:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:08:40.795 20:05:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:08:40.795 20:05:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.795 20:05:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.795 20:05:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.795 20:05:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:08:40.795 20:05:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:08:40.795 20:05:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.795 20:05:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:08:40.795 20:05:26 
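The resize step above reports each base bdev growing from 131072 to 204800 blocks, and the RAID1 bdev from 122880 to 196608. Those figures follow directly from the lvol sizes requested via `bdev_lvol_resize` (64 MiB initially, 100 MiB after resize) at the 512-byte blocklen shown in the trace; a minimal sketch of the arithmetic (constants taken from the log, not from SPDK headers):

```python
# Convert an lvol size in MiB to a 512-byte block count, matching the
# "old size 131072, new size 204800" figures that
# raid_bdev_resize_base_bdev prints in the trace above.
BLOCK_SIZE = 512          # blocklen reported for the raid bdev
MIB = 1024 * 1024

def mib_to_blocks(size_mib: int) -> int:
    """Number of BLOCK_SIZE blocks in size_mib mebibytes."""
    return size_mib * MIB // BLOCK_SIZE

print(mib_to_blocks(64))    # 131072  (initial lvol size)
print(mib_to_blocks(100))   # 204800  (after bdev_lvol_resize ... 100)

# The RAID1 bdev is 8192 blocks (4 MiB) smaller than a base bdev both
# before (131072 - 122880) and after (204800 - 196608) the resize --
# per-base-bdev overhead; the exact on-disk layout is not shown in the log.
print(mib_to_blocks(100) - 196608)  # 8192
```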
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.795 20:05:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.795 20:05:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:08:40.795 20:05:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:40.795 20:05:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:40.795 20:05:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.795 20:05:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.795 20:05:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:40.795 20:05:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:08:40.795 [2024-10-17 20:05:26.393780] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:40.795 20:05:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.795 20:05:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:40.795 20:05:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:40.795 20:05:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:08:40.795 20:05:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:08:40.795 20:05:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.795 20:05:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.795 [2024-10-17 20:05:26.445541] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being 
removed: closing lvstore lvs0 00:08:40.795 [2024-10-17 20:05:26.445808] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:08:40.795 [2024-10-17 20:05:26.445959] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:08:41.122 [2024-10-17 20:05:26.446316] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:41.122 [2024-10-17 20:05:26.446654] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:41.122 [2024-10-17 20:05:26.446741] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:41.122 [2024-10-17 20:05:26.446762] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:08:41.122 20:05:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.122 20:05:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:08:41.122 20:05:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.122 20:05:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.122 [2024-10-17 20:05:26.453478] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:08:41.122 [2024-10-17 20:05:26.453565] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:41.122 [2024-10-17 20:05:26.453593] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:08:41.122 [2024-10-17 20:05:26.453609] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:41.122 [2024-10-17 20:05:26.456632] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:41.122 [2024-10-17 20:05:26.456698] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:08:41.122 pt0 00:08:41.122 20:05:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.122 20:05:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:08:41.122 20:05:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.122 20:05:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.122 [2024-10-17 20:05:26.459125] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 9ca527e6-4757-4b7a-a3dc-5ff54924d05a 00:08:41.122 [2024-10-17 20:05:26.459212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 9ca527e6-4757-4b7a-a3dc-5ff54924d05a is claimed 00:08:41.122 [2024-10-17 20:05:26.459365] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev ebbe0ecc-0997-4ae0-9601-268503289ac5 00:08:41.122 [2024-10-17 20:05:26.459406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev ebbe0ecc-0997-4ae0-9601-268503289ac5 is claimed 00:08:41.122 [2024-10-17 20:05:26.459560] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev ebbe0ecc-0997-4ae0-9601-268503289ac5 (2) smaller than existing raid bdev Raid (3) 00:08:41.122 [2024-10-17 20:05:26.459647] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 9ca527e6-4757-4b7a-a3dc-5ff54924d05a: File exists 00:08:41.122 [2024-10-17 20:05:26.459713] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:41.122 [2024-10-17 20:05:26.459733] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:41.122 [2024-10-17 20:05:26.460079] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:08:41.122 [2024-10-17 20:05:26.460297] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:41.122 [2024-10-17 
20:05:26.460313] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:08:41.122 [2024-10-17 20:05:26.460548] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:41.122 20:05:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.122 20:05:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:41.122 20:05:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:41.122 20:05:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.122 20:05:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:41.122 20:05:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:08:41.122 20:05:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.122 [2024-10-17 20:05:26.473778] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:41.122 20:05:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.122 20:05:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:41.122 20:05:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:41.122 20:05:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:08:41.122 20:05:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60074 00:08:41.122 20:05:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 60074 ']' 00:08:41.122 20:05:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 60074 00:08:41.122 20:05:26 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:08:41.122 20:05:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:41.122 20:05:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60074 00:08:41.123 killing process with pid 60074 00:08:41.123 20:05:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:41.123 20:05:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:41.123 20:05:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60074' 00:08:41.123 20:05:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 60074 00:08:41.123 [2024-10-17 20:05:26.553491] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:41.123 [2024-10-17 20:05:26.553569] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:41.123 [2024-10-17 20:05:26.553663] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:41.123 [2024-10-17 20:05:26.553677] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:08:41.123 20:05:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 60074 00:08:42.504 [2024-10-17 20:05:27.739728] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:43.072 20:05:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:08:43.072 00:08:43.072 real 0m4.372s 00:08:43.072 user 0m4.698s 00:08:43.072 sys 0m0.647s 00:08:43.072 20:05:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:43.072 ************************************ 00:08:43.072 END TEST raid1_resize_superblock_test 00:08:43.072 
************************************ 00:08:43.072 20:05:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.330 20:05:28 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:08:43.330 20:05:28 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:08:43.330 20:05:28 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:08:43.330 20:05:28 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:08:43.330 20:05:28 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:08:43.330 20:05:28 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:08:43.330 20:05:28 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:43.330 20:05:28 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:43.330 20:05:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:43.330 ************************************ 00:08:43.330 START TEST raid_function_test_raid0 00:08:43.330 ************************************ 00:08:43.330 20:05:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1125 -- # raid_function_test raid0 00:08:43.330 20:05:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:08:43.330 20:05:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:08:43.330 20:05:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:08:43.330 Process raid pid: 60171 00:08:43.330 20:05:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60171 00:08:43.330 20:05:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60171' 00:08:43.330 20:05:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60171 00:08:43.330 20:05:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@831 -- # '[' -z 60171 ']' 00:08:43.330 20:05:28 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:43.330 20:05:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:43.330 20:05:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:43.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:43.330 20:05:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:43.330 20:05:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:43.330 20:05:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:43.330 [2024-10-17 20:05:28.891401] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 
00:08:43.330 [2024-10-17 20:05:28.891907] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:43.589 [2024-10-17 20:05:29.067671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.589 [2024-10-17 20:05:29.190264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.848 [2024-10-17 20:05:29.390850] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:43.848 [2024-10-17 20:05:29.390910] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:44.417 20:05:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:44.417 20:05:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # return 0 00:08:44.417 20:05:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:08:44.417 20:05:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.417 20:05:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:44.417 Base_1 00:08:44.417 20:05:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.417 20:05:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:08:44.417 20:05:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.417 20:05:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:44.417 Base_2 00:08:44.417 20:05:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.417 20:05:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 
64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:08:44.417 20:05:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.417 20:05:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:44.417 [2024-10-17 20:05:29.960477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:08:44.417 [2024-10-17 20:05:29.963000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:08:44.417 [2024-10-17 20:05:29.963106] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:44.417 [2024-10-17 20:05:29.963339] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:44.417 [2024-10-17 20:05:29.963717] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:44.417 [2024-10-17 20:05:29.963924] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:44.417 [2024-10-17 20:05:29.963941] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:08:44.417 [2024-10-17 20:05:29.964179] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:44.417 20:05:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.417 20:05:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:44.417 20:05:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.417 20:05:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:44.417 20:05:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:08:44.417 20:05:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.417 20:05:30 bdev_raid.raid_function_test_raid0 -- 
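The raid_function_test prologue above issues a fixed RPC sequence before handing the array to nbd. Condensed from the xtrace output into a standalone sketch (the `rpc.py` path and socket match the log; this assumes an SPDK target is already running and listening on the socket):

```
#!/usr/bin/env bash
# Sketch of the RPC sequence raid_function_test runs for raid0,
# condensed from the trace above. Requires a running SPDK target.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

$RPC bdev_malloc_create 32 512 -b Base_1        # 32 MiB, 512 B blocks
$RPC bdev_malloc_create 32 512 -b Base_2
$RPC bdev_raid_create -z 64 -r raid0 -b 'Base_1 Base_2' -n raid
$RPC bdev_raid_get_bdevs online | jq -r '.[0]["name"] | select(.)'
$RPC nbd_start_disk raid /dev/nbd0              # expose array as /dev/nbd0
```

Every command here appears verbatim in the trace; only the grouping into a script is added.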
bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:08:44.417 20:05:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:08:44.417 20:05:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:08:44.417 20:05:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:08:44.417 20:05:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:08:44.417 20:05:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:44.417 20:05:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:08:44.417 20:05:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:44.417 20:05:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:08:44.417 20:05:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:44.417 20:05:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:08:44.417 20:05:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:08:44.678 [2024-10-17 20:05:30.252620] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:44.678 /dev/nbd0 00:08:44.678 20:05:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:44.678 20:05:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:44.678 20:05:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:44.678 20:05:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # local i 00:08:44.678 20:05:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:44.678 
20:05:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:44.678 20:05:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:08:44.678 20:05:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # break 00:08:44.678 20:05:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:44.678 20:05:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:44.678 20:05:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:44.678 1+0 records in 00:08:44.678 1+0 records out 00:08:44.678 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000345374 s, 11.9 MB/s 00:08:44.678 20:05:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:44.678 20:05:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # size=4096 00:08:44.678 20:05:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:44.678 20:05:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:44.678 20:05:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # return 0 00:08:44.678 20:05:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:44.678 20:05:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:08:44.678 20:05:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:08:44.678 20:05:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:08:44.678 20:05:30 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:08:45.246 20:05:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:45.246 { 00:08:45.246 "nbd_device": "/dev/nbd0", 00:08:45.246 "bdev_name": "raid" 00:08:45.246 } 00:08:45.246 ]' 00:08:45.246 20:05:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:45.246 { 00:08:45.246 "nbd_device": "/dev/nbd0", 00:08:45.246 "bdev_name": "raid" 00:08:45.246 } 00:08:45.246 ]' 00:08:45.246 20:05:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:45.246 20:05:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:08:45.246 20:05:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:08:45.246 20:05:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:45.246 20:05:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:08:45.246 20:05:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:08:45.246 20:05:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:08:45.246 20:05:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:08:45.246 20:05:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:08:45.246 20:05:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:08:45.246 20:05:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:08:45.246 20:05:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:08:45.246 20:05:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:08:45.246 20:05:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v 
LOG-SEC 00:08:45.246 20:05:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:08:45.246 20:05:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:08:45.246 20:05:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:08:45.246 20:05:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:08:45.246 20:05:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:08:45.246 20:05:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:08:45.246 20:05:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:08:45.246 20:05:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:08:45.246 20:05:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:08:45.246 20:05:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:08:45.246 20:05:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:08:45.246 4096+0 records in 00:08:45.246 4096+0 records out 00:08:45.246 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0308066 s, 68.1 MB/s 00:08:45.246 20:05:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:08:45.521 4096+0 records in 00:08:45.521 4096+0 records out 00:08:45.521 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.32055 s, 6.5 MB/s 00:08:45.522 20:05:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:08:45.522 20:05:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:45.522 20:05:31 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:08:45.522 20:05:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:45.522 20:05:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:08:45.522 20:05:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:08:45.522 20:05:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:08:45.522 128+0 records in 00:08:45.522 128+0 records out 00:08:45.522 65536 bytes (66 kB, 64 KiB) copied, 0.000437943 s, 150 MB/s 00:08:45.522 20:05:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:08:45.522 20:05:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:45.522 20:05:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:45.522 20:05:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:45.522 20:05:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:45.522 20:05:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:08:45.522 20:05:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:08:45.522 20:05:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:08:45.522 2035+0 records in 00:08:45.522 2035+0 records out 00:08:45.522 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0122699 s, 84.9 MB/s 00:08:45.522 20:05:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:08:45.522 20:05:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:45.522 20:05:31 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:45.522 20:05:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:45.522 20:05:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:45.522 20:05:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:08:45.522 20:05:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:08:45.522 20:05:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:08:45.522 456+0 records in 00:08:45.522 456+0 records out 00:08:45.522 233472 bytes (233 kB, 228 KiB) copied, 0.00218261 s, 107 MB/s 00:08:45.522 20:05:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:08:45.522 20:05:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:45.522 20:05:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:45.522 20:05:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:45.522 20:05:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:45.522 20:05:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:08:45.522 20:05:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:08:45.522 20:05:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:08:45.522 20:05:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:45.522 20:05:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:45.522 20:05:31 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:08:45.522 20:05:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:45.522 20:05:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:08:46.089 [2024-10-17 20:05:31.478882] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:46.089 20:05:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:46.089 20:05:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:46.089 20:05:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:46.089 20:05:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:46.089 20:05:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:46.089 20:05:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:46.089 20:05:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:08:46.089 20:05:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:08:46.089 20:05:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:08:46.089 20:05:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:08:46.089 20:05:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:08:46.347 20:05:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:46.347 20:05:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:46.347 20:05:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:08:46.347 20:05:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:46.347 20:05:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:46.348 20:05:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:46.348 20:05:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:08:46.348 20:05:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:08:46.348 20:05:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:46.348 20:05:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:08:46.348 20:05:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:08:46.348 20:05:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60171 00:08:46.348 20:05:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@950 -- # '[' -z 60171 ']' 00:08:46.348 20:05:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # kill -0 60171 00:08:46.348 20:05:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # uname 00:08:46.348 20:05:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:46.348 20:05:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60171 00:08:46.348 killing process with pid 60171 00:08:46.348 20:05:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:46.348 20:05:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:46.348 20:05:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60171' 00:08:46.348 20:05:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@969 -- # kill 60171 
00:08:46.348 [2024-10-17 20:05:31.843095] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:46.348 20:05:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@974 -- # wait 60171 00:08:46.348 [2024-10-17 20:05:31.843217] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:46.348 [2024-10-17 20:05:31.843280] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:46.348 [2024-10-17 20:05:31.843299] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:08:46.606 [2024-10-17 20:05:32.016485] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:47.543 20:05:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:08:47.543 00:08:47.543 real 0m4.194s 00:08:47.543 user 0m5.154s 00:08:47.543 sys 0m1.033s 00:08:47.543 20:05:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:47.543 20:05:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:47.543 ************************************ 00:08:47.543 END TEST raid_function_test_raid0 00:08:47.543 ************************************ 00:08:47.543 20:05:33 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:08:47.543 20:05:33 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:47.543 20:05:33 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:47.543 20:05:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:47.543 ************************************ 00:08:47.543 START TEST raid_function_test_concat 00:08:47.543 ************************************ 00:08:47.543 20:05:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1125 -- # raid_function_test concat 00:08:47.543 20:05:33 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:08:47.543 20:05:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:08:47.543 20:05:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:08:47.543 20:05:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60300 00:08:47.543 Process raid pid: 60300 00:08:47.543 20:05:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60300' 00:08:47.543 20:05:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:47.543 20:05:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60300 00:08:47.543 20:05:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@831 -- # '[' -z 60300 ']' 00:08:47.543 20:05:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:47.543 20:05:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:47.543 20:05:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:47.543 20:05:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:47.543 20:05:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:47.543 [2024-10-17 20:05:33.141446] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 
00:08:47.543 [2024-10-17 20:05:33.141653] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:47.802 [2024-10-17 20:05:33.317610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.802 [2024-10-17 20:05:33.448100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.060 [2024-10-17 20:05:33.641109] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:48.060 [2024-10-17 20:05:33.641157] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:48.627 20:05:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:48.627 20:05:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # return 0 00:08:48.627 20:05:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:08:48.627 20:05:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.627 20:05:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:48.627 Base_1 00:08:48.627 20:05:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.627 20:05:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:08:48.627 20:05:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.627 20:05:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:48.627 Base_2 00:08:48.627 20:05:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.627 20:05:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:08:48.627 20:05:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.627 20:05:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:48.627 [2024-10-17 20:05:34.199683] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:08:48.627 [2024-10-17 20:05:34.202214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:08:48.627 [2024-10-17 20:05:34.202312] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:48.627 [2024-10-17 20:05:34.202347] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:48.627 [2024-10-17 20:05:34.202663] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:48.627 [2024-10-17 20:05:34.202837] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:48.627 [2024-10-17 20:05:34.202853] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:08:48.627 [2024-10-17 20:05:34.203112] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:48.627 20:05:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.627 20:05:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:48.628 20:05:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.628 20:05:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:48.628 20:05:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:08:48.628 20:05:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.628 20:05:34 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:08:48.628 20:05:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:08:48.628 20:05:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:08:48.628 20:05:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:08:48.628 20:05:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:08:48.628 20:05:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:48.628 20:05:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:08:48.628 20:05:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:48.628 20:05:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:08:48.628 20:05:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:48.628 20:05:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:08:48.628 20:05:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:08:48.887 [2024-10-17 20:05:34.527807] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:49.145 /dev/nbd0 00:08:49.145 20:05:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:49.145 20:05:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:49.145 20:05:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:49.145 20:05:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # local i 00:08:49.145 20:05:34 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:49.145 20:05:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:49.145 20:05:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:08:49.145 20:05:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # break 00:08:49.145 20:05:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:49.145 20:05:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:49.145 20:05:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:49.145 1+0 records in 00:08:49.145 1+0 records out 00:08:49.145 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000270467 s, 15.1 MB/s 00:08:49.145 20:05:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:49.145 20:05:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # size=4096 00:08:49.145 20:05:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:49.145 20:05:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:49.145 20:05:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # return 0 00:08:49.145 20:05:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:49.145 20:05:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:08:49.145 20:05:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:08:49.145 20:05:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk.sock 00:08:49.145 20:05:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:08:49.403 20:05:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:49.403 { 00:08:49.403 "nbd_device": "/dev/nbd0", 00:08:49.403 "bdev_name": "raid" 00:08:49.403 } 00:08:49.403 ]' 00:08:49.403 20:05:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:49.403 { 00:08:49.403 "nbd_device": "/dev/nbd0", 00:08:49.403 "bdev_name": "raid" 00:08:49.403 } 00:08:49.403 ]' 00:08:49.403 20:05:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:49.403 20:05:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:08:49.403 20:05:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:08:49.403 20:05:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:49.403 20:05:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:08:49.403 20:05:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:08:49.403 20:05:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:08:49.403 20:05:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:08:49.403 20:05:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:08:49.403 20:05:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:08:49.403 20:05:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:08:49.403 20:05:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:08:49.403 20:05:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC 
/dev/nbd0 00:08:49.403 20:05:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:08:49.403 20:05:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:08:49.404 20:05:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:08:49.404 20:05:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:08:49.404 20:05:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:08:49.404 20:05:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:08:49.404 20:05:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:08:49.404 20:05:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:08:49.404 20:05:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:08:49.404 20:05:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:08:49.404 20:05:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:08:49.404 20:05:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:08:49.404 4096+0 records in 00:08:49.404 4096+0 records out 00:08:49.404 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0233145 s, 90.0 MB/s 00:08:49.404 20:05:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:08:49.661 4096+0 records in 00:08:49.661 4096+0 records out 00:08:49.661 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.257937 s, 8.1 MB/s 00:08:49.661 20:05:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:08:49.661 20:05:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 
2097152 /raidtest/raidrandtest /dev/nbd0 00:08:49.661 20:05:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:08:49.661 20:05:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:49.661 20:05:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:08:49.662 20:05:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:08:49.662 20:05:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:08:49.662 128+0 records in 00:08:49.662 128+0 records out 00:08:49.662 65536 bytes (66 kB, 64 KiB) copied, 0.000859126 s, 76.3 MB/s 00:08:49.662 20:05:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:08:49.662 20:05:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:49.662 20:05:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:49.662 20:05:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:49.662 20:05:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:49.662 20:05:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:08:49.662 20:05:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:08:49.662 20:05:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:08:49.662 2035+0 records in 00:08:49.662 2035+0 records out 00:08:49.662 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00851257 s, 122 MB/s 00:08:49.662 20:05:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:08:49.662 20:05:35 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:49.662 20:05:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:49.662 20:05:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:49.662 20:05:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:49.662 20:05:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:08:49.662 20:05:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:08:49.662 20:05:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:08:49.662 456+0 records in 00:08:49.662 456+0 records out 00:08:49.662 233472 bytes (233 kB, 228 KiB) copied, 0.00172376 s, 135 MB/s 00:08:49.662 20:05:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:08:49.919 20:05:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:49.919 20:05:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:49.919 20:05:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:49.920 20:05:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:49.920 20:05:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:08:49.920 20:05:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:08:49.920 20:05:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:08:49.920 20:05:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:49.920 
20:05:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:49.920 20:05:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:08:49.920 20:05:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:49.920 20:05:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:08:50.177 20:05:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:50.178 [2024-10-17 20:05:35.643206] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:50.178 20:05:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:50.178 20:05:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:50.178 20:05:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:50.178 20:05:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:50.178 20:05:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:50.178 20:05:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:08:50.178 20:05:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:08:50.178 20:05:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:08:50.178 20:05:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:08:50.178 20:05:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:08:50.435 20:05:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:50.435 20:05:35 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:50.435 20:05:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:50.435 20:05:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:50.435 20:05:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:50.435 20:05:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:50.435 20:05:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:08:50.435 20:05:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:08:50.435 20:05:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:50.435 20:05:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:08:50.435 20:05:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:08:50.435 20:05:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60300 00:08:50.435 20:05:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@950 -- # '[' -z 60300 ']' 00:08:50.435 20:05:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # kill -0 60300 00:08:50.435 20:05:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # uname 00:08:50.435 20:05:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:50.435 20:05:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60300 00:08:50.435 20:05:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:50.435 killing process with pid 60300 00:08:50.435 20:05:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:50.435 20:05:35 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 60300' 00:08:50.435 20:05:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@969 -- # kill 60300 00:08:50.435 [2024-10-17 20:05:35.992442] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:50.435 20:05:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@974 -- # wait 60300 00:08:50.435 [2024-10-17 20:05:35.992559] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:50.435 [2024-10-17 20:05:35.992625] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:50.435 [2024-10-17 20:05:35.992649] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:08:50.693 [2024-10-17 20:05:36.154067] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:51.626 20:05:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:08:51.626 00:08:51.626 real 0m4.069s 00:08:51.626 user 0m5.083s 00:08:51.626 sys 0m0.939s 00:08:51.626 ************************************ 00:08:51.626 END TEST raid_function_test_concat 00:08:51.626 ************************************ 00:08:51.626 20:05:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:51.626 20:05:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:51.626 20:05:37 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:08:51.626 20:05:37 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:51.627 20:05:37 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:51.627 20:05:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:51.627 ************************************ 00:08:51.627 START TEST raid0_resize_test 00:08:51.627 ************************************ 00:08:51.627 20:05:37 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 0 00:08:51.627 20:05:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:08:51.627 20:05:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:08:51.627 20:05:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:08:51.627 20:05:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:08:51.627 20:05:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:08:51.627 20:05:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:08:51.627 20:05:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:08:51.627 20:05:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:08:51.627 20:05:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60429 00:08:51.627 Process raid pid: 60429 00:08:51.627 20:05:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60429' 00:08:51.627 20:05:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60429 00:08:51.627 20:05:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:51.627 20:05:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@831 -- # '[' -z 60429 ']' 00:08:51.627 20:05:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:51.627 20:05:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:51.627 20:05:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:51.627 20:05:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:51.627 20:05:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.627 [2024-10-17 20:05:37.268818] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 00:08:51.627 [2024-10-17 20:05:37.269020] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:51.883 [2024-10-17 20:05:37.444287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.140 [2024-10-17 20:05:37.568177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.140 [2024-10-17 20:05:37.751251] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:52.140 [2024-10-17 20:05:37.751315] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:52.707 20:05:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:52.707 20:05:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # return 0 00:08:52.707 20:05:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:08:52.707 20:05:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.707 20:05:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.707 Base_1 00:08:52.707 20:05:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.707 20:05:38 
bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:08:52.707 20:05:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.707 20:05:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.707 Base_2 00:08:52.707 20:05:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.707 20:05:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:08:52.707 20:05:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:08:52.707 20:05:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.707 20:05:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.707 [2024-10-17 20:05:38.220246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:08:52.707 [2024-10-17 20:05:38.222565] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:08:52.707 [2024-10-17 20:05:38.222653] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:52.707 [2024-10-17 20:05:38.222673] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:52.707 [2024-10-17 20:05:38.222980] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:52.707 [2024-10-17 20:05:38.223176] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:52.707 [2024-10-17 20:05:38.223194] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:08:52.707 [2024-10-17 20:05:38.223399] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:52.707 20:05:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.707 20:05:38 
bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:08:52.707 20:05:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.707 20:05:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.707 [2024-10-17 20:05:38.228220] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:52.707 [2024-10-17 20:05:38.228258] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:08:52.707 true 00:08:52.707 20:05:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.707 20:05:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:52.707 20:05:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:08:52.707 20:05:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.707 20:05:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.707 [2024-10-17 20:05:38.240419] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:52.707 20:05:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.707 20:05:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:08:52.707 20:05:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:08:52.707 20:05:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:08:52.707 20:05:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:08:52.708 20:05:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:08:52.708 20:05:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:08:52.708 20:05:38 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.708 20:05:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.708 [2024-10-17 20:05:38.288217] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:52.708 [2024-10-17 20:05:38.288248] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:08:52.708 [2024-10-17 20:05:38.288286] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:08:52.708 true 00:08:52.708 20:05:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.708 20:05:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:52.708 20:05:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.708 20:05:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.708 20:05:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:08:52.708 [2024-10-17 20:05:38.300457] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:52.708 20:05:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.708 20:05:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:08:52.708 20:05:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:08:52.708 20:05:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:08:52.708 20:05:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:08:52.708 20:05:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:08:52.708 20:05:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60429 00:08:52.708 20:05:38 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@950 -- # '[' -z 60429 ']' 00:08:52.708 20:05:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # kill -0 60429 00:08:52.708 20:05:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # uname 00:08:52.708 20:05:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:52.708 20:05:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60429 00:08:52.966 20:05:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:52.966 killing process with pid 60429 00:08:52.966 20:05:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:52.966 20:05:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60429' 00:08:52.966 20:05:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@969 -- # kill 60429 00:08:52.966 [2024-10-17 20:05:38.382223] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:52.966 20:05:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@974 -- # wait 60429 00:08:52.966 [2024-10-17 20:05:38.382335] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:52.966 [2024-10-17 20:05:38.382432] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:52.966 [2024-10-17 20:05:38.382447] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:08:52.966 [2024-10-17 20:05:38.398674] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:53.901 20:05:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:08:53.901 00:08:53.901 real 0m2.198s 00:08:53.901 user 0m2.403s 00:08:53.901 sys 0m0.384s 00:08:53.901 20:05:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:53.901 
20:05:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.901 ************************************ 00:08:53.901 END TEST raid0_resize_test 00:08:53.901 ************************************ 00:08:53.901 20:05:39 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:08:53.901 20:05:39 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:53.901 20:05:39 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:53.901 20:05:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:53.901 ************************************ 00:08:53.901 START TEST raid1_resize_test 00:08:53.901 ************************************ 00:08:53.901 20:05:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 1 00:08:53.901 20:05:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:08:53.901 20:05:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:08:53.901 20:05:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:08:53.901 20:05:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:08:53.901 20:05:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:08:53.901 20:05:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:08:53.901 20:05:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:08:53.901 20:05:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:08:53.901 20:05:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60485 00:08:53.901 Process raid pid: 60485 00:08:53.901 20:05:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60485' 00:08:53.901 20:05:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60485 00:08:53.901 20:05:39 
bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:53.901 20:05:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@831 -- # '[' -z 60485 ']' 00:08:53.901 20:05:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.901 20:05:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:53.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:53.901 20:05:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:53.901 20:05:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:53.901 20:05:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.901 [2024-10-17 20:05:39.527455] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 
00:08:53.901 [2024-10-17 20:05:39.527664] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:54.160 [2024-10-17 20:05:39.697276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.418 [2024-10-17 20:05:39.817188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.418 [2024-10-17 20:05:40.019049] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:54.418 [2024-10-17 20:05:40.019127] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:55.003 20:05:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:55.003 20:05:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # return 0 00:08:55.003 20:05:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:08:55.003 20:05:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.003 20:05:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.003 Base_1 00:08:55.003 20:05:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.003 20:05:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:08:55.003 20:05:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.003 20:05:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.003 Base_2 00:08:55.003 20:05:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.003 20:05:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:08:55.003 20:05:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd 
bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:08:55.003 20:05:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.003 20:05:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.003 [2024-10-17 20:05:40.489127] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:08:55.003 [2024-10-17 20:05:40.491548] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:08:55.003 [2024-10-17 20:05:40.491626] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:55.003 [2024-10-17 20:05:40.491643] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:55.003 [2024-10-17 20:05:40.491915] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:55.003 [2024-10-17 20:05:40.492156] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:55.003 [2024-10-17 20:05:40.492174] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:08:55.003 [2024-10-17 20:05:40.492354] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:55.003 20:05:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.003 20:05:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:08:55.003 20:05:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.003 20:05:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.003 [2024-10-17 20:05:40.497096] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:55.003 [2024-10-17 20:05:40.497131] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:08:55.003 true 00:08:55.003 
20:05:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.003 20:05:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:55.003 20:05:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:08:55.003 20:05:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.003 20:05:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.003 [2024-10-17 20:05:40.509299] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:55.003 20:05:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.003 20:05:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:08:55.003 20:05:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:08:55.003 20:05:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:08:55.003 20:05:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:08:55.003 20:05:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:08:55.003 20:05:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:08:55.003 20:05:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.003 20:05:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.003 [2024-10-17 20:05:40.561137] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:55.003 [2024-10-17 20:05:40.561173] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:08:55.003 [2024-10-17 20:05:40.561220] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:08:55.003 true 00:08:55.003 20:05:40 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.003 20:05:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:55.003 20:05:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.003 20:05:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.003 20:05:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:08:55.003 [2024-10-17 20:05:40.573304] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:55.003 20:05:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.003 20:05:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:08:55.003 20:05:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:08:55.003 20:05:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:08:55.003 20:05:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:08:55.003 20:05:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:08:55.003 20:05:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60485 00:08:55.003 20:05:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@950 -- # '[' -z 60485 ']' 00:08:55.003 20:05:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # kill -0 60485 00:08:55.003 20:05:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # uname 00:08:55.003 20:05:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:55.003 20:05:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60485 00:08:55.003 killing process with pid 60485 00:08:55.003 20:05:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:55.003 20:05:40 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:55.003 20:05:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60485' 00:08:55.003 20:05:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@969 -- # kill 60485 00:08:55.003 [2024-10-17 20:05:40.654069] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:55.003 20:05:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@974 -- # wait 60485 00:08:55.003 [2024-10-17 20:05:40.654187] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:55.261 [2024-10-17 20:05:40.654851] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:55.261 [2024-10-17 20:05:40.654877] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:08:55.261 [2024-10-17 20:05:40.669899] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:56.196 20:05:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:08:56.196 00:08:56.196 real 0m2.238s 00:08:56.196 user 0m2.452s 00:08:56.196 sys 0m0.389s 00:08:56.196 ************************************ 00:08:56.196 END TEST raid1_resize_test 00:08:56.196 ************************************ 00:08:56.196 20:05:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:56.196 20:05:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.196 20:05:41 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:56.196 20:05:41 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:56.196 20:05:41 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:08:56.196 20:05:41 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:56.196 20:05:41 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:08:56.196 20:05:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:56.196 ************************************ 00:08:56.196 START TEST raid_state_function_test 00:08:56.196 ************************************ 00:08:56.196 20:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 false 00:08:56.196 20:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:56.196 20:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:56.196 20:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:56.196 20:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:56.196 20:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:56.197 20:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:56.197 20:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:56.197 20:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:56.197 20:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:56.197 20:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:56.197 20:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:56.197 20:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:56.197 Process raid pid: 60553 00:08:56.197 20:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:56.197 20:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:56.197 20:05:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:56.197 20:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:56.197 20:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:56.197 20:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:56.197 20:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:56.197 20:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:56.197 20:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:56.197 20:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:56.197 20:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:56.197 20:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60553 00:08:56.197 20:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60553' 00:08:56.197 20:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:56.197 20:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60553 00:08:56.197 20:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 60553 ']' 00:08:56.197 20:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:56.197 20:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:56.197 20:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:08:56.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:56.197 20:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:56.197 20:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.197 [2024-10-17 20:05:41.801675] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 00:08:56.197 [2024-10-17 20:05:41.802134] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:56.455 [2024-10-17 20:05:41.965949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.455 [2024-10-17 20:05:42.092249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.713 [2024-10-17 20:05:42.292419] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:56.713 [2024-10-17 20:05:42.292778] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:57.280 20:05:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:57.280 20:05:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:08:57.280 20:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:57.280 20:05:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.280 20:05:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.280 [2024-10-17 20:05:42.845764] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:57.280 [2024-10-17 20:05:42.845864] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:57.280 [2024-10-17 20:05:42.845881] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:57.280 [2024-10-17 20:05:42.845906] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:57.280 20:05:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.280 20:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:57.280 20:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:57.280 20:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:57.280 20:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:57.280 20:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:57.280 20:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:57.280 20:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.281 20:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.281 20:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.281 20:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.281 20:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.281 20:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.281 20:05:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.281 20:05:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.281 20:05:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.281 20:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.281 "name": "Existed_Raid", 00:08:57.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.281 "strip_size_kb": 64, 00:08:57.281 "state": "configuring", 00:08:57.281 "raid_level": "raid0", 00:08:57.281 "superblock": false, 00:08:57.281 "num_base_bdevs": 2, 00:08:57.281 "num_base_bdevs_discovered": 0, 00:08:57.281 "num_base_bdevs_operational": 2, 00:08:57.281 "base_bdevs_list": [ 00:08:57.281 { 00:08:57.281 "name": "BaseBdev1", 00:08:57.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.281 "is_configured": false, 00:08:57.281 "data_offset": 0, 00:08:57.281 "data_size": 0 00:08:57.281 }, 00:08:57.281 { 00:08:57.281 "name": "BaseBdev2", 00:08:57.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.281 "is_configured": false, 00:08:57.281 "data_offset": 0, 00:08:57.281 "data_size": 0 00:08:57.281 } 00:08:57.281 ] 00:08:57.281 }' 00:08:57.281 20:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.281 20:05:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.847 20:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:57.847 20:05:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.847 20:05:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.847 [2024-10-17 20:05:43.325894] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:57.847 [2024-10-17 20:05:43.325946] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:57.847 20:05:43 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.847 20:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:57.847 20:05:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.847 20:05:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.847 [2024-10-17 20:05:43.333897] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:57.847 [2024-10-17 20:05:43.333981] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:57.847 [2024-10-17 20:05:43.333995] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:57.847 [2024-10-17 20:05:43.334051] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:57.847 20:05:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.847 20:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:57.847 20:05:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.847 20:05:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.847 [2024-10-17 20:05:43.378229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:57.847 BaseBdev1 00:08:57.847 20:05:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.847 20:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:57.848 20:05:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:57.848 20:05:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:57.848 20:05:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:57.848 20:05:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:57.848 20:05:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:57.848 20:05:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:57.848 20:05:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.848 20:05:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.848 20:05:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.848 20:05:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:57.848 20:05:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.848 20:05:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.848 [ 00:08:57.848 { 00:08:57.848 "name": "BaseBdev1", 00:08:57.848 "aliases": [ 00:08:57.848 "2667c52c-ea36-40c4-811b-57b0fb3e6204" 00:08:57.848 ], 00:08:57.848 "product_name": "Malloc disk", 00:08:57.848 "block_size": 512, 00:08:57.848 "num_blocks": 65536, 00:08:57.848 "uuid": "2667c52c-ea36-40c4-811b-57b0fb3e6204", 00:08:57.848 "assigned_rate_limits": { 00:08:57.848 "rw_ios_per_sec": 0, 00:08:57.848 "rw_mbytes_per_sec": 0, 00:08:57.848 "r_mbytes_per_sec": 0, 00:08:57.848 "w_mbytes_per_sec": 0 00:08:57.848 }, 00:08:57.848 "claimed": true, 00:08:57.848 "claim_type": "exclusive_write", 00:08:57.848 "zoned": false, 00:08:57.848 "supported_io_types": { 00:08:57.848 "read": true, 00:08:57.848 "write": true, 00:08:57.848 "unmap": true, 00:08:57.848 "flush": true, 00:08:57.848 "reset": true, 00:08:57.848 
"nvme_admin": false, 00:08:57.848 "nvme_io": false, 00:08:57.848 "nvme_io_md": false, 00:08:57.848 "write_zeroes": true, 00:08:57.848 "zcopy": true, 00:08:57.848 "get_zone_info": false, 00:08:57.848 "zone_management": false, 00:08:57.848 "zone_append": false, 00:08:57.848 "compare": false, 00:08:57.848 "compare_and_write": false, 00:08:57.848 "abort": true, 00:08:57.848 "seek_hole": false, 00:08:57.848 "seek_data": false, 00:08:57.848 "copy": true, 00:08:57.848 "nvme_iov_md": false 00:08:57.848 }, 00:08:57.848 "memory_domains": [ 00:08:57.848 { 00:08:57.848 "dma_device_id": "system", 00:08:57.848 "dma_device_type": 1 00:08:57.848 }, 00:08:57.848 { 00:08:57.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.848 "dma_device_type": 2 00:08:57.848 } 00:08:57.848 ], 00:08:57.848 "driver_specific": {} 00:08:57.848 } 00:08:57.848 ] 00:08:57.848 20:05:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.848 20:05:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:57.848 20:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:57.848 20:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:57.848 20:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:57.848 20:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:57.848 20:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:57.848 20:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:57.848 20:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.848 20:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:08:57.848 20:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.848 20:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.848 20:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.848 20:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.848 20:05:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.848 20:05:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.848 20:05:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.848 20:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.848 "name": "Existed_Raid", 00:08:57.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.848 "strip_size_kb": 64, 00:08:57.848 "state": "configuring", 00:08:57.848 "raid_level": "raid0", 00:08:57.848 "superblock": false, 00:08:57.848 "num_base_bdevs": 2, 00:08:57.848 "num_base_bdevs_discovered": 1, 00:08:57.848 "num_base_bdevs_operational": 2, 00:08:57.848 "base_bdevs_list": [ 00:08:57.848 { 00:08:57.848 "name": "BaseBdev1", 00:08:57.848 "uuid": "2667c52c-ea36-40c4-811b-57b0fb3e6204", 00:08:57.848 "is_configured": true, 00:08:57.848 "data_offset": 0, 00:08:57.848 "data_size": 65536 00:08:57.848 }, 00:08:57.848 { 00:08:57.848 "name": "BaseBdev2", 00:08:57.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.848 "is_configured": false, 00:08:57.848 "data_offset": 0, 00:08:57.848 "data_size": 0 00:08:57.848 } 00:08:57.848 ] 00:08:57.848 }' 00:08:57.848 20:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.848 20:05:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.428 20:05:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:58.428 20:05:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.428 20:05:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.428 [2024-10-17 20:05:43.946550] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:58.428 [2024-10-17 20:05:43.946616] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:58.428 20:05:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.428 20:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:58.428 20:05:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.428 20:05:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.428 [2024-10-17 20:05:43.958613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:58.428 [2024-10-17 20:05:43.961294] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:58.428 [2024-10-17 20:05:43.961573] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:58.428 20:05:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.428 20:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:58.428 20:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:58.428 20:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:58.428 20:05:43 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.428 20:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.428 20:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:58.428 20:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.428 20:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:58.428 20:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.428 20:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.428 20:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.428 20:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.428 20:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.428 20:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.428 20:05:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.428 20:05:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.428 20:05:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.428 20:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.428 "name": "Existed_Raid", 00:08:58.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.428 "strip_size_kb": 64, 00:08:58.428 "state": "configuring", 00:08:58.428 "raid_level": "raid0", 00:08:58.428 "superblock": false, 00:08:58.428 "num_base_bdevs": 2, 00:08:58.428 "num_base_bdevs_discovered": 1, 00:08:58.428 "num_base_bdevs_operational": 2, 
00:08:58.428 "base_bdevs_list": [ 00:08:58.428 { 00:08:58.428 "name": "BaseBdev1", 00:08:58.428 "uuid": "2667c52c-ea36-40c4-811b-57b0fb3e6204", 00:08:58.428 "is_configured": true, 00:08:58.428 "data_offset": 0, 00:08:58.428 "data_size": 65536 00:08:58.428 }, 00:08:58.428 { 00:08:58.428 "name": "BaseBdev2", 00:08:58.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.429 "is_configured": false, 00:08:58.429 "data_offset": 0, 00:08:58.429 "data_size": 0 00:08:58.429 } 00:08:58.429 ] 00:08:58.429 }' 00:08:58.429 20:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.429 20:05:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.995 20:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:58.995 20:05:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.995 20:05:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.995 [2024-10-17 20:05:44.491439] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:58.995 [2024-10-17 20:05:44.491515] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:58.995 [2024-10-17 20:05:44.491528] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:58.995 [2024-10-17 20:05:44.491825] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:58.995 [2024-10-17 20:05:44.492051] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:58.995 [2024-10-17 20:05:44.492092] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:58.995 [2024-10-17 20:05:44.492497] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:58.995 BaseBdev2 00:08:58.995 
20:05:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.995 20:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:58.995 20:05:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:58.995 20:05:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:58.995 20:05:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:58.995 20:05:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:58.995 20:05:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:58.995 20:05:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:58.995 20:05:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.995 20:05:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.995 20:05:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.995 20:05:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:58.995 20:05:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.995 20:05:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.995 [ 00:08:58.995 { 00:08:58.995 "name": "BaseBdev2", 00:08:58.995 "aliases": [ 00:08:58.995 "abbfdf99-d4f1-482e-a2e8-021e7e389106" 00:08:58.995 ], 00:08:58.995 "product_name": "Malloc disk", 00:08:58.995 "block_size": 512, 00:08:58.995 "num_blocks": 65536, 00:08:58.995 "uuid": "abbfdf99-d4f1-482e-a2e8-021e7e389106", 00:08:58.995 "assigned_rate_limits": { 00:08:58.995 "rw_ios_per_sec": 0, 00:08:58.995 "rw_mbytes_per_sec": 0, 
00:08:58.995 "r_mbytes_per_sec": 0, 00:08:58.995 "w_mbytes_per_sec": 0 00:08:58.995 }, 00:08:58.995 "claimed": true, 00:08:58.995 "claim_type": "exclusive_write", 00:08:58.995 "zoned": false, 00:08:58.995 "supported_io_types": { 00:08:58.995 "read": true, 00:08:58.995 "write": true, 00:08:58.995 "unmap": true, 00:08:58.995 "flush": true, 00:08:58.995 "reset": true, 00:08:58.995 "nvme_admin": false, 00:08:58.995 "nvme_io": false, 00:08:58.995 "nvme_io_md": false, 00:08:58.995 "write_zeroes": true, 00:08:58.995 "zcopy": true, 00:08:58.995 "get_zone_info": false, 00:08:58.995 "zone_management": false, 00:08:58.995 "zone_append": false, 00:08:58.995 "compare": false, 00:08:58.995 "compare_and_write": false, 00:08:58.995 "abort": true, 00:08:58.995 "seek_hole": false, 00:08:58.995 "seek_data": false, 00:08:58.995 "copy": true, 00:08:58.995 "nvme_iov_md": false 00:08:58.996 }, 00:08:58.996 "memory_domains": [ 00:08:58.996 { 00:08:58.996 "dma_device_id": "system", 00:08:58.996 "dma_device_type": 1 00:08:58.996 }, 00:08:58.996 { 00:08:58.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.996 "dma_device_type": 2 00:08:58.996 } 00:08:58.996 ], 00:08:58.996 "driver_specific": {} 00:08:58.996 } 00:08:58.996 ] 00:08:58.996 20:05:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.996 20:05:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:58.996 20:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:58.996 20:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:58.996 20:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:08:58.996 20:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.996 20:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:08:58.996 20:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:58.996 20:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.996 20:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:58.996 20:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.996 20:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.996 20:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.996 20:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.996 20:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.996 20:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.996 20:05:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.996 20:05:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.996 20:05:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.996 20:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.996 "name": "Existed_Raid", 00:08:58.996 "uuid": "35b5d20f-872b-4800-8db6-cd4b6e9547a7", 00:08:58.996 "strip_size_kb": 64, 00:08:58.996 "state": "online", 00:08:58.996 "raid_level": "raid0", 00:08:58.996 "superblock": false, 00:08:58.996 "num_base_bdevs": 2, 00:08:58.996 "num_base_bdevs_discovered": 2, 00:08:58.996 "num_base_bdevs_operational": 2, 00:08:58.996 "base_bdevs_list": [ 00:08:58.996 { 00:08:58.996 "name": "BaseBdev1", 00:08:58.996 "uuid": "2667c52c-ea36-40c4-811b-57b0fb3e6204", 00:08:58.996 
"is_configured": true, 00:08:58.996 "data_offset": 0, 00:08:58.996 "data_size": 65536 00:08:58.996 }, 00:08:58.996 { 00:08:58.996 "name": "BaseBdev2", 00:08:58.996 "uuid": "abbfdf99-d4f1-482e-a2e8-021e7e389106", 00:08:58.996 "is_configured": true, 00:08:58.996 "data_offset": 0, 00:08:58.996 "data_size": 65536 00:08:58.996 } 00:08:58.996 ] 00:08:58.996 }' 00:08:58.996 20:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.996 20:05:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.563 20:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:59.563 20:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:59.563 20:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:59.563 20:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:59.563 20:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:59.563 20:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:59.563 20:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:59.563 20:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:59.563 20:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.563 20:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.563 [2024-10-17 20:05:45.032017] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:59.563 20:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.563 20:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:08:59.563 "name": "Existed_Raid", 00:08:59.563 "aliases": [ 00:08:59.563 "35b5d20f-872b-4800-8db6-cd4b6e9547a7" 00:08:59.563 ], 00:08:59.563 "product_name": "Raid Volume", 00:08:59.563 "block_size": 512, 00:08:59.563 "num_blocks": 131072, 00:08:59.563 "uuid": "35b5d20f-872b-4800-8db6-cd4b6e9547a7", 00:08:59.563 "assigned_rate_limits": { 00:08:59.563 "rw_ios_per_sec": 0, 00:08:59.563 "rw_mbytes_per_sec": 0, 00:08:59.563 "r_mbytes_per_sec": 0, 00:08:59.563 "w_mbytes_per_sec": 0 00:08:59.563 }, 00:08:59.563 "claimed": false, 00:08:59.563 "zoned": false, 00:08:59.563 "supported_io_types": { 00:08:59.563 "read": true, 00:08:59.563 "write": true, 00:08:59.563 "unmap": true, 00:08:59.563 "flush": true, 00:08:59.563 "reset": true, 00:08:59.563 "nvme_admin": false, 00:08:59.563 "nvme_io": false, 00:08:59.563 "nvme_io_md": false, 00:08:59.563 "write_zeroes": true, 00:08:59.563 "zcopy": false, 00:08:59.563 "get_zone_info": false, 00:08:59.563 "zone_management": false, 00:08:59.563 "zone_append": false, 00:08:59.563 "compare": false, 00:08:59.563 "compare_and_write": false, 00:08:59.563 "abort": false, 00:08:59.563 "seek_hole": false, 00:08:59.563 "seek_data": false, 00:08:59.563 "copy": false, 00:08:59.563 "nvme_iov_md": false 00:08:59.563 }, 00:08:59.563 "memory_domains": [ 00:08:59.563 { 00:08:59.563 "dma_device_id": "system", 00:08:59.563 "dma_device_type": 1 00:08:59.563 }, 00:08:59.563 { 00:08:59.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.563 "dma_device_type": 2 00:08:59.563 }, 00:08:59.563 { 00:08:59.563 "dma_device_id": "system", 00:08:59.563 "dma_device_type": 1 00:08:59.563 }, 00:08:59.563 { 00:08:59.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.563 "dma_device_type": 2 00:08:59.563 } 00:08:59.563 ], 00:08:59.563 "driver_specific": { 00:08:59.563 "raid": { 00:08:59.563 "uuid": "35b5d20f-872b-4800-8db6-cd4b6e9547a7", 00:08:59.563 "strip_size_kb": 64, 00:08:59.563 "state": "online", 00:08:59.563 "raid_level": "raid0", 
00:08:59.563 "superblock": false, 00:08:59.563 "num_base_bdevs": 2, 00:08:59.563 "num_base_bdevs_discovered": 2, 00:08:59.563 "num_base_bdevs_operational": 2, 00:08:59.563 "base_bdevs_list": [ 00:08:59.563 { 00:08:59.563 "name": "BaseBdev1", 00:08:59.563 "uuid": "2667c52c-ea36-40c4-811b-57b0fb3e6204", 00:08:59.563 "is_configured": true, 00:08:59.563 "data_offset": 0, 00:08:59.563 "data_size": 65536 00:08:59.563 }, 00:08:59.563 { 00:08:59.563 "name": "BaseBdev2", 00:08:59.563 "uuid": "abbfdf99-d4f1-482e-a2e8-021e7e389106", 00:08:59.563 "is_configured": true, 00:08:59.563 "data_offset": 0, 00:08:59.563 "data_size": 65536 00:08:59.563 } 00:08:59.564 ] 00:08:59.564 } 00:08:59.564 } 00:08:59.564 }' 00:08:59.564 20:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:59.564 20:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:59.564 BaseBdev2' 00:08:59.564 20:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:59.564 20:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:59.564 20:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:59.564 20:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:59.564 20:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:59.564 20:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.564 20:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.564 20:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:08:59.564 20:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:59.564 20:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:59.564 20:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:59.564 20:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:59.564 20:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.564 20:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:59.564 20:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.822 20:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.822 20:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:59.822 20:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:59.822 20:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:59.822 20:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.822 20:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.822 [2024-10-17 20:05:45.263752] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:59.822 [2024-10-17 20:05:45.263810] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:59.822 [2024-10-17 20:05:45.263871] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:59.822 20:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.822 20:05:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:59.822 20:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:59.822 20:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:59.822 20:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:59.822 20:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:59.822 20:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:08:59.822 20:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.822 20:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:59.822 20:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:59.822 20:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.822 20:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:59.822 20:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.822 20:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.822 20:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.822 20:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.822 20:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.822 20:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.822 20:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.822 20:05:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.822 20:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.822 20:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.822 "name": "Existed_Raid", 00:08:59.822 "uuid": "35b5d20f-872b-4800-8db6-cd4b6e9547a7", 00:08:59.822 "strip_size_kb": 64, 00:08:59.822 "state": "offline", 00:08:59.822 "raid_level": "raid0", 00:08:59.822 "superblock": false, 00:08:59.822 "num_base_bdevs": 2, 00:08:59.822 "num_base_bdevs_discovered": 1, 00:08:59.822 "num_base_bdevs_operational": 1, 00:08:59.822 "base_bdevs_list": [ 00:08:59.822 { 00:08:59.822 "name": null, 00:08:59.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.822 "is_configured": false, 00:08:59.822 "data_offset": 0, 00:08:59.822 "data_size": 65536 00:08:59.822 }, 00:08:59.822 { 00:08:59.822 "name": "BaseBdev2", 00:08:59.822 "uuid": "abbfdf99-d4f1-482e-a2e8-021e7e389106", 00:08:59.822 "is_configured": true, 00:08:59.822 "data_offset": 0, 00:08:59.822 "data_size": 65536 00:08:59.822 } 00:08:59.822 ] 00:08:59.822 }' 00:08:59.822 20:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.822 20:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.389 20:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:00.389 20:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:00.389 20:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.389 20:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.389 20:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.389 20:05:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:00.389 20:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.390 20:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:00.390 20:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:00.390 20:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:00.390 20:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.390 20:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.390 [2024-10-17 20:05:45.935325] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:00.390 [2024-10-17 20:05:45.935431] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:00.390 20:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.390 20:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:00.390 20:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:00.390 20:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.390 20:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:00.390 20:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.390 20:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.390 20:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.648 20:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:00.648 20:05:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:00.648 20:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:00.648 20:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60553 00:09:00.648 20:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 60553 ']' 00:09:00.648 20:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 60553 00:09:00.648 20:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:09:00.648 20:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:00.648 20:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60553 00:09:00.648 killing process with pid 60553 00:09:00.648 20:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:00.648 20:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:00.648 20:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60553' 00:09:00.648 20:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 60553 00:09:00.648 [2024-10-17 20:05:46.115725] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:00.648 20:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 60553 00:09:00.648 [2024-10-17 20:05:46.131291] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:01.584 20:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:01.584 00:09:01.584 real 0m5.457s 00:09:01.584 user 0m8.235s 00:09:01.584 sys 0m0.761s 00:09:01.584 20:05:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:09:01.584 ************************************ 00:09:01.584 END TEST raid_state_function_test 00:09:01.584 20:05:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.584 ************************************ 00:09:01.584 20:05:47 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:09:01.584 20:05:47 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:01.584 20:05:47 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:01.584 20:05:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:01.584 ************************************ 00:09:01.584 START TEST raid_state_function_test_sb 00:09:01.584 ************************************ 00:09:01.584 20:05:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 true 00:09:01.584 20:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:01.584 20:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:01.584 20:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:01.584 20:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:01.584 20:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:01.584 20:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:01.584 20:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:01.584 20:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:01.584 20:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:01.584 20:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # 
echo BaseBdev2 00:09:01.584 20:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:01.584 20:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:01.585 20:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:01.585 20:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:01.585 20:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:01.585 20:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:01.585 20:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:01.585 20:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:01.585 20:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:01.585 20:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:01.585 20:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:01.585 20:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:01.585 20:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:01.585 20:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=60806 00:09:01.585 20:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:01.585 Process raid pid: 60806 00:09:01.585 20:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60806' 00:09:01.585 20:05:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 60806 00:09:01.585 20:05:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 60806 ']' 00:09:01.585 20:05:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.585 20:05:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:01.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:01.585 20:05:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.585 20:05:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:01.585 20:05:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.843 [2024-10-17 20:05:47.342566] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 
00:09:01.843 [2024-10-17 20:05:47.342783] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:02.101 [2024-10-17 20:05:47.519652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.101 [2024-10-17 20:05:47.650001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.360 [2024-10-17 20:05:47.841448] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:02.360 [2024-10-17 20:05:47.841532] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:02.927 20:05:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:02.927 20:05:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:09:02.927 20:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:02.927 20:05:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.927 20:05:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.927 [2024-10-17 20:05:48.366498] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:02.927 [2024-10-17 20:05:48.366596] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:02.927 [2024-10-17 20:05:48.366613] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:02.927 [2024-10-17 20:05:48.366628] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:02.927 20:05:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.927 
20:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:02.927 20:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:02.927 20:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:02.927 20:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:02.927 20:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:02.927 20:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:02.927 20:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.927 20:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.927 20:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.927 20:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.927 20:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.927 20:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:02.927 20:05:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.927 20:05:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.927 20:05:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.927 20:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.927 "name": "Existed_Raid", 00:09:02.927 "uuid": "7c7076ef-fe0b-4bd3-a1ad-c52577ecebe4", 00:09:02.927 "strip_size_kb": 
64, 00:09:02.927 "state": "configuring", 00:09:02.927 "raid_level": "raid0", 00:09:02.927 "superblock": true, 00:09:02.927 "num_base_bdevs": 2, 00:09:02.927 "num_base_bdevs_discovered": 0, 00:09:02.927 "num_base_bdevs_operational": 2, 00:09:02.927 "base_bdevs_list": [ 00:09:02.927 { 00:09:02.927 "name": "BaseBdev1", 00:09:02.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.927 "is_configured": false, 00:09:02.927 "data_offset": 0, 00:09:02.927 "data_size": 0 00:09:02.927 }, 00:09:02.927 { 00:09:02.927 "name": "BaseBdev2", 00:09:02.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.927 "is_configured": false, 00:09:02.927 "data_offset": 0, 00:09:02.927 "data_size": 0 00:09:02.927 } 00:09:02.927 ] 00:09:02.927 }' 00:09:02.927 20:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.927 20:05:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.495 20:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:03.495 20:05:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.495 20:05:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.495 [2024-10-17 20:05:48.878525] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:03.495 [2024-10-17 20:05:48.878585] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:03.495 20:05:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.495 20:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:03.495 20:05:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.495 20:05:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.495 [2024-10-17 20:05:48.886542] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:03.495 [2024-10-17 20:05:48.886608] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:03.495 [2024-10-17 20:05:48.886622] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:03.495 [2024-10-17 20:05:48.886640] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:03.495 20:05:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.495 20:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:03.495 20:05:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.495 20:05:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.495 [2024-10-17 20:05:48.932909] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:03.495 BaseBdev1 00:09:03.495 20:05:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.495 20:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:03.495 20:05:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:03.495 20:05:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:03.495 20:05:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:03.495 20:05:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:03.495 20:05:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:09:03.495 20:05:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:03.495 20:05:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.495 20:05:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.495 20:05:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.495 20:05:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:03.495 20:05:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.495 20:05:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.495 [ 00:09:03.495 { 00:09:03.495 "name": "BaseBdev1", 00:09:03.495 "aliases": [ 00:09:03.495 "9f3d49fe-017c-49c7-9b28-de81db0fe225" 00:09:03.495 ], 00:09:03.495 "product_name": "Malloc disk", 00:09:03.495 "block_size": 512, 00:09:03.495 "num_blocks": 65536, 00:09:03.495 "uuid": "9f3d49fe-017c-49c7-9b28-de81db0fe225", 00:09:03.495 "assigned_rate_limits": { 00:09:03.495 "rw_ios_per_sec": 0, 00:09:03.495 "rw_mbytes_per_sec": 0, 00:09:03.495 "r_mbytes_per_sec": 0, 00:09:03.495 "w_mbytes_per_sec": 0 00:09:03.495 }, 00:09:03.495 "claimed": true, 00:09:03.495 "claim_type": "exclusive_write", 00:09:03.495 "zoned": false, 00:09:03.495 "supported_io_types": { 00:09:03.495 "read": true, 00:09:03.495 "write": true, 00:09:03.495 "unmap": true, 00:09:03.495 "flush": true, 00:09:03.495 "reset": true, 00:09:03.495 "nvme_admin": false, 00:09:03.495 "nvme_io": false, 00:09:03.495 "nvme_io_md": false, 00:09:03.495 "write_zeroes": true, 00:09:03.495 "zcopy": true, 00:09:03.495 "get_zone_info": false, 00:09:03.495 "zone_management": false, 00:09:03.495 "zone_append": false, 00:09:03.495 "compare": false, 00:09:03.495 "compare_and_write": false, 00:09:03.495 
"abort": true, 00:09:03.495 "seek_hole": false, 00:09:03.495 "seek_data": false, 00:09:03.495 "copy": true, 00:09:03.495 "nvme_iov_md": false 00:09:03.495 }, 00:09:03.495 "memory_domains": [ 00:09:03.495 { 00:09:03.495 "dma_device_id": "system", 00:09:03.495 "dma_device_type": 1 00:09:03.495 }, 00:09:03.495 { 00:09:03.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.495 "dma_device_type": 2 00:09:03.495 } 00:09:03.495 ], 00:09:03.495 "driver_specific": {} 00:09:03.495 } 00:09:03.495 ] 00:09:03.495 20:05:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.495 20:05:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:03.495 20:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:03.495 20:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:03.495 20:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:03.495 20:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:03.495 20:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.495 20:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:03.495 20:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.495 20:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.495 20:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.495 20:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.495 20:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:03.495 20:05:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.495 20:05:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.495 20:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.495 20:05:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.495 20:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.495 "name": "Existed_Raid", 00:09:03.495 "uuid": "ff609409-9dcd-4ab3-92b0-a84bd89c1053", 00:09:03.495 "strip_size_kb": 64, 00:09:03.495 "state": "configuring", 00:09:03.495 "raid_level": "raid0", 00:09:03.495 "superblock": true, 00:09:03.495 "num_base_bdevs": 2, 00:09:03.495 "num_base_bdevs_discovered": 1, 00:09:03.495 "num_base_bdevs_operational": 2, 00:09:03.495 "base_bdevs_list": [ 00:09:03.495 { 00:09:03.495 "name": "BaseBdev1", 00:09:03.495 "uuid": "9f3d49fe-017c-49c7-9b28-de81db0fe225", 00:09:03.495 "is_configured": true, 00:09:03.495 "data_offset": 2048, 00:09:03.495 "data_size": 63488 00:09:03.495 }, 00:09:03.495 { 00:09:03.495 "name": "BaseBdev2", 00:09:03.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.495 "is_configured": false, 00:09:03.495 "data_offset": 0, 00:09:03.495 "data_size": 0 00:09:03.495 } 00:09:03.495 ] 00:09:03.495 }' 00:09:03.495 20:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.495 20:05:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.148 20:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:04.148 20:05:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.148 20:05:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:04.148 [2024-10-17 20:05:49.485166] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:04.148 [2024-10-17 20:05:49.485233] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:04.148 20:05:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.148 20:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:04.148 20:05:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.148 20:05:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.148 [2024-10-17 20:05:49.493236] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:04.148 [2024-10-17 20:05:49.495736] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:04.148 [2024-10-17 20:05:49.495804] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:04.148 20:05:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.148 20:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:04.148 20:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:04.148 20:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:04.148 20:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.148 20:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:04.148 20:05:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:04.148 20:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.148 20:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:04.148 20:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.148 20:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.148 20:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.148 20:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.148 20:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.148 20:05:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.148 20:05:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.148 20:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.148 20:05:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.148 20:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.148 "name": "Existed_Raid", 00:09:04.148 "uuid": "f018c806-220f-43c8-9ae6-2151a598580e", 00:09:04.148 "strip_size_kb": 64, 00:09:04.148 "state": "configuring", 00:09:04.148 "raid_level": "raid0", 00:09:04.148 "superblock": true, 00:09:04.148 "num_base_bdevs": 2, 00:09:04.148 "num_base_bdevs_discovered": 1, 00:09:04.148 "num_base_bdevs_operational": 2, 00:09:04.148 "base_bdevs_list": [ 00:09:04.148 { 00:09:04.148 "name": "BaseBdev1", 00:09:04.148 "uuid": "9f3d49fe-017c-49c7-9b28-de81db0fe225", 00:09:04.148 "is_configured": true, 00:09:04.148 "data_offset": 2048, 
00:09:04.148 "data_size": 63488 00:09:04.148 }, 00:09:04.148 { 00:09:04.148 "name": "BaseBdev2", 00:09:04.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.148 "is_configured": false, 00:09:04.148 "data_offset": 0, 00:09:04.148 "data_size": 0 00:09:04.148 } 00:09:04.148 ] 00:09:04.148 }' 00:09:04.148 20:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.148 20:05:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.407 20:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:04.407 20:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.407 20:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.666 [2024-10-17 20:05:50.069168] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:04.666 [2024-10-17 20:05:50.069564] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:04.666 [2024-10-17 20:05:50.069583] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:04.666 [2024-10-17 20:05:50.069965] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:04.666 BaseBdev2 00:09:04.666 [2024-10-17 20:05:50.070209] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:04.666 [2024-10-17 20:05:50.070237] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:04.666 [2024-10-17 20:05:50.070414] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:04.666 20:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.666 20:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:09:04.666 20:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:04.666 20:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:04.666 20:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:04.666 20:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:04.666 20:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:04.666 20:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:04.666 20:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.666 20:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.666 20:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.666 20:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:04.666 20:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.666 20:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.666 [ 00:09:04.666 { 00:09:04.666 "name": "BaseBdev2", 00:09:04.666 "aliases": [ 00:09:04.666 "a67077db-78c2-4203-bd3b-6a91dd62ed87" 00:09:04.666 ], 00:09:04.666 "product_name": "Malloc disk", 00:09:04.666 "block_size": 512, 00:09:04.666 "num_blocks": 65536, 00:09:04.666 "uuid": "a67077db-78c2-4203-bd3b-6a91dd62ed87", 00:09:04.666 "assigned_rate_limits": { 00:09:04.666 "rw_ios_per_sec": 0, 00:09:04.666 "rw_mbytes_per_sec": 0, 00:09:04.666 "r_mbytes_per_sec": 0, 00:09:04.666 "w_mbytes_per_sec": 0 00:09:04.666 }, 00:09:04.666 "claimed": true, 00:09:04.666 "claim_type": 
"exclusive_write", 00:09:04.666 "zoned": false, 00:09:04.666 "supported_io_types": { 00:09:04.666 "read": true, 00:09:04.666 "write": true, 00:09:04.666 "unmap": true, 00:09:04.666 "flush": true, 00:09:04.666 "reset": true, 00:09:04.666 "nvme_admin": false, 00:09:04.666 "nvme_io": false, 00:09:04.666 "nvme_io_md": false, 00:09:04.666 "write_zeroes": true, 00:09:04.666 "zcopy": true, 00:09:04.666 "get_zone_info": false, 00:09:04.666 "zone_management": false, 00:09:04.666 "zone_append": false, 00:09:04.666 "compare": false, 00:09:04.666 "compare_and_write": false, 00:09:04.666 "abort": true, 00:09:04.666 "seek_hole": false, 00:09:04.666 "seek_data": false, 00:09:04.666 "copy": true, 00:09:04.666 "nvme_iov_md": false 00:09:04.666 }, 00:09:04.666 "memory_domains": [ 00:09:04.666 { 00:09:04.666 "dma_device_id": "system", 00:09:04.666 "dma_device_type": 1 00:09:04.666 }, 00:09:04.666 { 00:09:04.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.666 "dma_device_type": 2 00:09:04.666 } 00:09:04.666 ], 00:09:04.666 "driver_specific": {} 00:09:04.666 } 00:09:04.666 ] 00:09:04.666 20:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.666 20:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:04.666 20:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:04.666 20:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:04.666 20:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:09:04.666 20:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.666 20:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:04.666 20:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:09:04.666 20:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.666 20:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:04.666 20:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.666 20:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.666 20:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.666 20:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.666 20:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.666 20:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.666 20:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.666 20:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.666 20:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.666 20:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.666 "name": "Existed_Raid", 00:09:04.666 "uuid": "f018c806-220f-43c8-9ae6-2151a598580e", 00:09:04.666 "strip_size_kb": 64, 00:09:04.666 "state": "online", 00:09:04.667 "raid_level": "raid0", 00:09:04.667 "superblock": true, 00:09:04.667 "num_base_bdevs": 2, 00:09:04.667 "num_base_bdevs_discovered": 2, 00:09:04.667 "num_base_bdevs_operational": 2, 00:09:04.667 "base_bdevs_list": [ 00:09:04.667 { 00:09:04.667 "name": "BaseBdev1", 00:09:04.667 "uuid": "9f3d49fe-017c-49c7-9b28-de81db0fe225", 00:09:04.667 "is_configured": true, 00:09:04.667 "data_offset": 2048, 00:09:04.667 "data_size": 63488 
00:09:04.667 }, 00:09:04.667 { 00:09:04.667 "name": "BaseBdev2", 00:09:04.667 "uuid": "a67077db-78c2-4203-bd3b-6a91dd62ed87", 00:09:04.667 "is_configured": true, 00:09:04.667 "data_offset": 2048, 00:09:04.667 "data_size": 63488 00:09:04.667 } 00:09:04.667 ] 00:09:04.667 }' 00:09:04.667 20:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.667 20:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.235 20:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:05.235 20:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:05.235 20:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:05.235 20:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:05.235 20:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:05.235 20:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:05.235 20:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:05.235 20:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.235 20:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.235 20:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:05.235 [2024-10-17 20:05:50.649797] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:05.235 20:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.235 20:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:05.235 "name": 
"Existed_Raid", 00:09:05.235 "aliases": [ 00:09:05.235 "f018c806-220f-43c8-9ae6-2151a598580e" 00:09:05.235 ], 00:09:05.235 "product_name": "Raid Volume", 00:09:05.235 "block_size": 512, 00:09:05.235 "num_blocks": 126976, 00:09:05.235 "uuid": "f018c806-220f-43c8-9ae6-2151a598580e", 00:09:05.235 "assigned_rate_limits": { 00:09:05.235 "rw_ios_per_sec": 0, 00:09:05.235 "rw_mbytes_per_sec": 0, 00:09:05.235 "r_mbytes_per_sec": 0, 00:09:05.235 "w_mbytes_per_sec": 0 00:09:05.235 }, 00:09:05.235 "claimed": false, 00:09:05.235 "zoned": false, 00:09:05.235 "supported_io_types": { 00:09:05.235 "read": true, 00:09:05.235 "write": true, 00:09:05.235 "unmap": true, 00:09:05.235 "flush": true, 00:09:05.235 "reset": true, 00:09:05.235 "nvme_admin": false, 00:09:05.235 "nvme_io": false, 00:09:05.235 "nvme_io_md": false, 00:09:05.235 "write_zeroes": true, 00:09:05.235 "zcopy": false, 00:09:05.235 "get_zone_info": false, 00:09:05.235 "zone_management": false, 00:09:05.235 "zone_append": false, 00:09:05.235 "compare": false, 00:09:05.235 "compare_and_write": false, 00:09:05.235 "abort": false, 00:09:05.235 "seek_hole": false, 00:09:05.235 "seek_data": false, 00:09:05.235 "copy": false, 00:09:05.235 "nvme_iov_md": false 00:09:05.235 }, 00:09:05.235 "memory_domains": [ 00:09:05.235 { 00:09:05.235 "dma_device_id": "system", 00:09:05.235 "dma_device_type": 1 00:09:05.235 }, 00:09:05.235 { 00:09:05.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.235 "dma_device_type": 2 00:09:05.235 }, 00:09:05.235 { 00:09:05.235 "dma_device_id": "system", 00:09:05.235 "dma_device_type": 1 00:09:05.235 }, 00:09:05.235 { 00:09:05.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.235 "dma_device_type": 2 00:09:05.235 } 00:09:05.235 ], 00:09:05.235 "driver_specific": { 00:09:05.235 "raid": { 00:09:05.235 "uuid": "f018c806-220f-43c8-9ae6-2151a598580e", 00:09:05.235 "strip_size_kb": 64, 00:09:05.235 "state": "online", 00:09:05.235 "raid_level": "raid0", 00:09:05.235 "superblock": true, 00:09:05.235 
"num_base_bdevs": 2, 00:09:05.235 "num_base_bdevs_discovered": 2, 00:09:05.235 "num_base_bdevs_operational": 2, 00:09:05.235 "base_bdevs_list": [ 00:09:05.235 { 00:09:05.235 "name": "BaseBdev1", 00:09:05.235 "uuid": "9f3d49fe-017c-49c7-9b28-de81db0fe225", 00:09:05.235 "is_configured": true, 00:09:05.235 "data_offset": 2048, 00:09:05.235 "data_size": 63488 00:09:05.235 }, 00:09:05.235 { 00:09:05.235 "name": "BaseBdev2", 00:09:05.235 "uuid": "a67077db-78c2-4203-bd3b-6a91dd62ed87", 00:09:05.235 "is_configured": true, 00:09:05.235 "data_offset": 2048, 00:09:05.235 "data_size": 63488 00:09:05.235 } 00:09:05.235 ] 00:09:05.235 } 00:09:05.235 } 00:09:05.235 }' 00:09:05.235 20:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:05.235 20:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:05.235 BaseBdev2' 00:09:05.235 20:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:05.235 20:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:05.235 20:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:05.235 20:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:05.235 20:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:05.235 20:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.235 20:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.235 20:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:05.235 20:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:05.235 20:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:05.235 20:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:05.235 20:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:05.235 20:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:05.235 20:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.235 20:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.495 20:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.495 20:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:05.495 20:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:05.495 20:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:05.495 20:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.495 20:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.495 [2024-10-17 20:05:50.929574] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:05.495 [2024-10-17 20:05:50.929626] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:05.495 [2024-10-17 20:05:50.929694] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:05.495 20:05:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:09:05.495 20:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:05.495 20:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:05.495 20:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:05.495 20:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:05.495 20:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:05.495 20:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:09:05.495 20:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.495 20:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:05.495 20:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:05.495 20:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.495 20:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:05.495 20:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.495 20:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.495 20:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.495 20:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.495 20:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.495 20:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.495 20:05:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.495 20:05:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.495 20:05:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.495 20:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.495 "name": "Existed_Raid", 00:09:05.495 "uuid": "f018c806-220f-43c8-9ae6-2151a598580e", 00:09:05.495 "strip_size_kb": 64, 00:09:05.495 "state": "offline", 00:09:05.495 "raid_level": "raid0", 00:09:05.495 "superblock": true, 00:09:05.495 "num_base_bdevs": 2, 00:09:05.495 "num_base_bdevs_discovered": 1, 00:09:05.495 "num_base_bdevs_operational": 1, 00:09:05.495 "base_bdevs_list": [ 00:09:05.495 { 00:09:05.495 "name": null, 00:09:05.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.495 "is_configured": false, 00:09:05.495 "data_offset": 0, 00:09:05.495 "data_size": 63488 00:09:05.495 }, 00:09:05.495 { 00:09:05.495 "name": "BaseBdev2", 00:09:05.495 "uuid": "a67077db-78c2-4203-bd3b-6a91dd62ed87", 00:09:05.495 "is_configured": true, 00:09:05.495 "data_offset": 2048, 00:09:05.495 "data_size": 63488 00:09:05.495 } 00:09:05.495 ] 00:09:05.495 }' 00:09:05.495 20:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.495 20:05:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.061 20:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:06.061 20:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:06.061 20:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:06.061 20:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.061 20:05:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.061 20:05:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.061 20:05:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.061 20:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:06.061 20:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:06.061 20:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:06.061 20:05:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.061 20:05:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.061 [2024-10-17 20:05:51.595195] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:06.061 [2024-10-17 20:05:51.595273] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:06.061 20:05:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.061 20:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:06.061 20:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:06.061 20:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:06.061 20:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.061 20:05:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.061 20:05:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.061 20:05:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.320 20:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:06.320 20:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:06.320 20:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:06.320 20:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 60806 00:09:06.320 20:05:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 60806 ']' 00:09:06.320 20:05:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 60806 00:09:06.320 20:05:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:09:06.320 20:05:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:06.320 20:05:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60806 00:09:06.320 20:05:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:06.320 20:05:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:06.320 killing process with pid 60806 00:09:06.320 20:05:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60806' 00:09:06.320 20:05:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 60806 00:09:06.320 [2024-10-17 20:05:51.763732] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:06.320 20:05:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 60806 00:09:06.320 [2024-10-17 20:05:51.778903] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:07.255 20:05:52 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@328 -- # return 0 00:09:07.255 00:09:07.255 real 0m5.508s 00:09:07.255 user 0m8.385s 00:09:07.255 sys 0m0.807s 00:09:07.255 20:05:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:07.255 20:05:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.255 ************************************ 00:09:07.255 END TEST raid_state_function_test_sb 00:09:07.255 ************************************ 00:09:07.255 20:05:52 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:09:07.255 20:05:52 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:07.255 20:05:52 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:07.255 20:05:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:07.255 ************************************ 00:09:07.255 START TEST raid_superblock_test 00:09:07.255 ************************************ 00:09:07.255 20:05:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 2 00:09:07.255 20:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:09:07.255 20:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:09:07.255 20:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:07.255 20:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:07.255 20:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:07.255 20:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:07.255 20:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:07.255 20:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:07.255 20:05:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:07.255 20:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:07.255 20:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:07.255 20:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:07.255 20:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:07.255 20:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:09:07.255 20:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:07.255 20:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:07.255 20:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61064 00:09:07.255 20:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61064 00:09:07.255 20:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:07.255 20:05:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 61064 ']' 00:09:07.255 20:05:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:07.255 20:05:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:07.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:07.255 20:05:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:07.255 20:05:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:07.255 20:05:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.255 [2024-10-17 20:05:52.875536] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 00:09:07.255 [2024-10-17 20:05:52.875729] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61064 ] 00:09:07.513 [2024-10-17 20:05:53.034285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.513 [2024-10-17 20:05:53.160226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.772 [2024-10-17 20:05:53.347346] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:07.772 [2024-10-17 20:05:53.347431] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:08.339 20:05:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:08.339 20:05:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:09:08.339 20:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:08.339 20:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:08.339 20:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:08.339 20:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:08.339 20:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:08.339 20:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:08.339 20:05:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:08.339 20:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:08.339 20:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:08.339 20:05:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.339 20:05:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.339 malloc1 00:09:08.339 20:05:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.339 20:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:08.339 20:05:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.339 20:05:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.339 [2024-10-17 20:05:53.952635] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:08.339 [2024-10-17 20:05:53.952751] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:08.339 [2024-10-17 20:05:53.952784] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:08.339 [2024-10-17 20:05:53.952799] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:08.339 [2024-10-17 20:05:53.955856] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:08.339 [2024-10-17 20:05:53.955902] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:08.339 pt1 00:09:08.339 20:05:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.339 20:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:08.339 20:05:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:08.339 20:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:08.339 20:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:08.339 20:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:08.339 20:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:08.339 20:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:08.339 20:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:08.339 20:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:08.339 20:05:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.339 20:05:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.599 malloc2 00:09:08.599 20:05:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.599 20:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:08.599 20:05:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.599 20:05:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.599 [2024-10-17 20:05:54.005942] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:08.599 [2024-10-17 20:05:54.006243] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:08.599 [2024-10-17 20:05:54.006326] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:08.599 
[2024-10-17 20:05:54.006603] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:08.599 [2024-10-17 20:05:54.009451] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:08.599 [2024-10-17 20:05:54.009657] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:08.599 pt2 00:09:08.599 20:05:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.599 20:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:08.599 20:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:08.599 20:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:09:08.599 20:05:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.599 20:05:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.599 [2024-10-17 20:05:54.018100] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:08.599 [2024-10-17 20:05:54.020691] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:08.599 [2024-10-17 20:05:54.020900] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:08.599 [2024-10-17 20:05:54.020918] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:08.599 [2024-10-17 20:05:54.021380] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:08.599 [2024-10-17 20:05:54.021651] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:08.599 [2024-10-17 20:05:54.021706] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:08.599 [2024-10-17 20:05:54.022033] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:08.599 20:05:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.599 20:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:08.599 20:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:08.599 20:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:08.599 20:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:08.599 20:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:08.599 20:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:08.599 20:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.599 20:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.599 20:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.599 20:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.599 20:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.599 20:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:08.599 20:05:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.599 20:05:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.599 20:05:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.600 20:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.600 "name": "raid_bdev1", 00:09:08.600 "uuid": 
"1914c7bb-d7b0-4fa5-ada8-dced3bdfc33d", 00:09:08.600 "strip_size_kb": 64, 00:09:08.600 "state": "online", 00:09:08.600 "raid_level": "raid0", 00:09:08.600 "superblock": true, 00:09:08.600 "num_base_bdevs": 2, 00:09:08.600 "num_base_bdevs_discovered": 2, 00:09:08.600 "num_base_bdevs_operational": 2, 00:09:08.600 "base_bdevs_list": [ 00:09:08.600 { 00:09:08.600 "name": "pt1", 00:09:08.600 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:08.600 "is_configured": true, 00:09:08.600 "data_offset": 2048, 00:09:08.600 "data_size": 63488 00:09:08.600 }, 00:09:08.600 { 00:09:08.600 "name": "pt2", 00:09:08.600 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:08.600 "is_configured": true, 00:09:08.600 "data_offset": 2048, 00:09:08.600 "data_size": 63488 00:09:08.600 } 00:09:08.600 ] 00:09:08.600 }' 00:09:08.600 20:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.600 20:05:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.167 20:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:09.167 20:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:09.167 20:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:09.167 20:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:09.167 20:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:09.167 20:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:09.167 20:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:09.167 20:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:09.167 20:05:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.167 20:05:54 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.167 [2024-10-17 20:05:54.566602] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:09.167 20:05:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.167 20:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:09.167 "name": "raid_bdev1", 00:09:09.167 "aliases": [ 00:09:09.167 "1914c7bb-d7b0-4fa5-ada8-dced3bdfc33d" 00:09:09.167 ], 00:09:09.167 "product_name": "Raid Volume", 00:09:09.167 "block_size": 512, 00:09:09.167 "num_blocks": 126976, 00:09:09.167 "uuid": "1914c7bb-d7b0-4fa5-ada8-dced3bdfc33d", 00:09:09.167 "assigned_rate_limits": { 00:09:09.167 "rw_ios_per_sec": 0, 00:09:09.167 "rw_mbytes_per_sec": 0, 00:09:09.167 "r_mbytes_per_sec": 0, 00:09:09.167 "w_mbytes_per_sec": 0 00:09:09.167 }, 00:09:09.167 "claimed": false, 00:09:09.167 "zoned": false, 00:09:09.167 "supported_io_types": { 00:09:09.167 "read": true, 00:09:09.167 "write": true, 00:09:09.167 "unmap": true, 00:09:09.167 "flush": true, 00:09:09.167 "reset": true, 00:09:09.167 "nvme_admin": false, 00:09:09.167 "nvme_io": false, 00:09:09.167 "nvme_io_md": false, 00:09:09.167 "write_zeroes": true, 00:09:09.167 "zcopy": false, 00:09:09.167 "get_zone_info": false, 00:09:09.167 "zone_management": false, 00:09:09.167 "zone_append": false, 00:09:09.167 "compare": false, 00:09:09.167 "compare_and_write": false, 00:09:09.167 "abort": false, 00:09:09.167 "seek_hole": false, 00:09:09.167 "seek_data": false, 00:09:09.167 "copy": false, 00:09:09.167 "nvme_iov_md": false 00:09:09.167 }, 00:09:09.167 "memory_domains": [ 00:09:09.167 { 00:09:09.167 "dma_device_id": "system", 00:09:09.167 "dma_device_type": 1 00:09:09.167 }, 00:09:09.167 { 00:09:09.167 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.167 "dma_device_type": 2 00:09:09.168 }, 00:09:09.168 { 00:09:09.168 "dma_device_id": "system", 00:09:09.168 "dma_device_type": 
1 00:09:09.168 }, 00:09:09.168 { 00:09:09.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.168 "dma_device_type": 2 00:09:09.168 } 00:09:09.168 ], 00:09:09.168 "driver_specific": { 00:09:09.168 "raid": { 00:09:09.168 "uuid": "1914c7bb-d7b0-4fa5-ada8-dced3bdfc33d", 00:09:09.168 "strip_size_kb": 64, 00:09:09.168 "state": "online", 00:09:09.168 "raid_level": "raid0", 00:09:09.168 "superblock": true, 00:09:09.168 "num_base_bdevs": 2, 00:09:09.168 "num_base_bdevs_discovered": 2, 00:09:09.168 "num_base_bdevs_operational": 2, 00:09:09.168 "base_bdevs_list": [ 00:09:09.168 { 00:09:09.168 "name": "pt1", 00:09:09.168 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:09.168 "is_configured": true, 00:09:09.168 "data_offset": 2048, 00:09:09.168 "data_size": 63488 00:09:09.168 }, 00:09:09.168 { 00:09:09.168 "name": "pt2", 00:09:09.168 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:09.168 "is_configured": true, 00:09:09.168 "data_offset": 2048, 00:09:09.168 "data_size": 63488 00:09:09.168 } 00:09:09.168 ] 00:09:09.168 } 00:09:09.168 } 00:09:09.168 }' 00:09:09.168 20:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:09.168 20:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:09.168 pt2' 00:09:09.168 20:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:09.168 20:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:09.168 20:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:09.168 20:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:09.168 20:05:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.168 20:05:54 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.168 20:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:09.168 20:05:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.168 20:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:09.168 20:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:09.168 20:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:09.168 20:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:09.168 20:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:09.168 20:05:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.168 20:05:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.168 20:05:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.427 20:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:09.427 20:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:09.427 20:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:09.427 20:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:09.427 20:05:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.427 20:05:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.427 [2024-10-17 20:05:54.842636] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:09:09.427 20:05:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.427 20:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=1914c7bb-d7b0-4fa5-ada8-dced3bdfc33d 00:09:09.427 20:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 1914c7bb-d7b0-4fa5-ada8-dced3bdfc33d ']' 00:09:09.427 20:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:09.427 20:05:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.427 20:05:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.427 [2024-10-17 20:05:54.894356] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:09.427 [2024-10-17 20:05:54.894580] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:09.427 [2024-10-17 20:05:54.894711] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:09.427 [2024-10-17 20:05:54.894792] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:09.427 [2024-10-17 20:05:54.894813] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:09.427 20:05:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.427 20:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.427 20:05:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.427 20:05:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.427 20:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:09.427 20:05:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:09:09.427 20:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:09.427 20:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:09.427 20:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:09.427 20:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:09.427 20:05:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.427 20:05:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.427 20:05:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.427 20:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:09.427 20:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:09.427 20:05:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.427 20:05:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.427 20:05:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.427 20:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:09.427 20:05:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.427 20:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:09.427 20:05:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.427 20:05:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.427 20:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:09.427 20:05:55 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:09.427 20:05:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:09.427 20:05:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:09.427 20:05:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:09.427 20:05:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:09.427 20:05:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:09.427 20:05:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:09.427 20:05:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:09.427 20:05:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.427 20:05:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.427 [2024-10-17 20:05:55.042446] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:09.427 [2024-10-17 20:05:55.045063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:09.427 [2024-10-17 20:05:55.045177] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:09.427 [2024-10-17 20:05:55.045255] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:09.427 [2024-10-17 20:05:55.045322] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:09.427 [2024-10-17 20:05:55.045339] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:09.427 request: 00:09:09.427 { 00:09:09.427 "name": "raid_bdev1", 00:09:09.427 "raid_level": "raid0", 00:09:09.427 "base_bdevs": [ 00:09:09.427 "malloc1", 00:09:09.427 "malloc2" 00:09:09.427 ], 00:09:09.427 "strip_size_kb": 64, 00:09:09.427 "superblock": false, 00:09:09.427 "method": "bdev_raid_create", 00:09:09.427 "req_id": 1 00:09:09.427 } 00:09:09.427 Got JSON-RPC error response 00:09:09.427 response: 00:09:09.427 { 00:09:09.427 "code": -17, 00:09:09.427 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:09.427 } 00:09:09.427 20:05:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:09.427 20:05:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:09.427 20:05:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:09.427 20:05:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:09.427 20:05:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:09.427 20:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.427 20:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:09.427 20:05:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.427 20:05:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.428 20:05:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.686 20:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:09.686 20:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:09.686 20:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p 
pt1 -u 00000000-0000-0000-0000-000000000001 00:09:09.686 20:05:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.686 20:05:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.686 [2024-10-17 20:05:55.110435] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:09.686 [2024-10-17 20:05:55.110532] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:09.686 [2024-10-17 20:05:55.110563] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:09.686 [2024-10-17 20:05:55.110580] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:09.686 [2024-10-17 20:05:55.113651] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:09.686 [2024-10-17 20:05:55.113720] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:09.686 [2024-10-17 20:05:55.113863] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:09.686 [2024-10-17 20:05:55.113940] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:09.686 pt1 00:09:09.686 20:05:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.686 20:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:09:09.686 20:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:09.686 20:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.686 20:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:09.686 20:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.686 20:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:09:09.686 20:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.686 20:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.686 20:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.686 20:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.686 20:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.686 20:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:09.686 20:05:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.686 20:05:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.686 20:05:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.686 20:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.686 "name": "raid_bdev1", 00:09:09.686 "uuid": "1914c7bb-d7b0-4fa5-ada8-dced3bdfc33d", 00:09:09.686 "strip_size_kb": 64, 00:09:09.686 "state": "configuring", 00:09:09.686 "raid_level": "raid0", 00:09:09.686 "superblock": true, 00:09:09.686 "num_base_bdevs": 2, 00:09:09.686 "num_base_bdevs_discovered": 1, 00:09:09.686 "num_base_bdevs_operational": 2, 00:09:09.686 "base_bdevs_list": [ 00:09:09.686 { 00:09:09.686 "name": "pt1", 00:09:09.686 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:09.686 "is_configured": true, 00:09:09.686 "data_offset": 2048, 00:09:09.686 "data_size": 63488 00:09:09.686 }, 00:09:09.686 { 00:09:09.686 "name": null, 00:09:09.686 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:09.686 "is_configured": false, 00:09:09.686 "data_offset": 2048, 00:09:09.686 "data_size": 63488 00:09:09.686 } 00:09:09.686 ] 00:09:09.686 }' 00:09:09.686 20:05:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.686 20:05:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.252 20:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:09:10.252 20:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:10.252 20:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:10.252 20:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:10.252 20:05:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.252 20:05:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.252 [2024-10-17 20:05:55.630577] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:10.252 [2024-10-17 20:05:55.630840] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:10.252 [2024-10-17 20:05:55.630880] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:09:10.252 [2024-10-17 20:05:55.630899] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:10.252 [2024-10-17 20:05:55.631649] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:10.252 [2024-10-17 20:05:55.631688] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:10.252 [2024-10-17 20:05:55.631793] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:10.252 [2024-10-17 20:05:55.631829] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:10.252 [2024-10-17 20:05:55.631982] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:10.252 [2024-10-17 20:05:55.632019] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:10.252 [2024-10-17 20:05:55.632347] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:10.252 [2024-10-17 20:05:55.632580] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:10.252 [2024-10-17 20:05:55.632597] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:10.252 [2024-10-17 20:05:55.632758] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:10.252 pt2 00:09:10.252 20:05:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.252 20:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:10.252 20:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:10.252 20:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:10.252 20:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:10.252 20:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:10.252 20:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:10.252 20:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.253 20:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:10.253 20:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.253 20:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.253 20:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.253 20:05:55 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.253 20:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:10.253 20:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.253 20:05:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.253 20:05:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.253 20:05:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.253 20:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.253 "name": "raid_bdev1", 00:09:10.253 "uuid": "1914c7bb-d7b0-4fa5-ada8-dced3bdfc33d", 00:09:10.253 "strip_size_kb": 64, 00:09:10.253 "state": "online", 00:09:10.253 "raid_level": "raid0", 00:09:10.253 "superblock": true, 00:09:10.253 "num_base_bdevs": 2, 00:09:10.253 "num_base_bdevs_discovered": 2, 00:09:10.253 "num_base_bdevs_operational": 2, 00:09:10.253 "base_bdevs_list": [ 00:09:10.253 { 00:09:10.253 "name": "pt1", 00:09:10.253 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:10.253 "is_configured": true, 00:09:10.253 "data_offset": 2048, 00:09:10.253 "data_size": 63488 00:09:10.253 }, 00:09:10.253 { 00:09:10.253 "name": "pt2", 00:09:10.253 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:10.253 "is_configured": true, 00:09:10.253 "data_offset": 2048, 00:09:10.253 "data_size": 63488 00:09:10.253 } 00:09:10.253 ] 00:09:10.253 }' 00:09:10.253 20:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.253 20:05:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.511 20:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:10.511 20:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:10.511 
20:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:10.511 20:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:10.511 20:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:10.511 20:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:10.769 20:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:10.769 20:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:10.769 20:05:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.769 20:05:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.769 [2024-10-17 20:05:56.171098] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:10.769 20:05:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.769 20:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:10.769 "name": "raid_bdev1", 00:09:10.769 "aliases": [ 00:09:10.769 "1914c7bb-d7b0-4fa5-ada8-dced3bdfc33d" 00:09:10.769 ], 00:09:10.769 "product_name": "Raid Volume", 00:09:10.769 "block_size": 512, 00:09:10.769 "num_blocks": 126976, 00:09:10.769 "uuid": "1914c7bb-d7b0-4fa5-ada8-dced3bdfc33d", 00:09:10.769 "assigned_rate_limits": { 00:09:10.769 "rw_ios_per_sec": 0, 00:09:10.769 "rw_mbytes_per_sec": 0, 00:09:10.769 "r_mbytes_per_sec": 0, 00:09:10.769 "w_mbytes_per_sec": 0 00:09:10.769 }, 00:09:10.769 "claimed": false, 00:09:10.769 "zoned": false, 00:09:10.769 "supported_io_types": { 00:09:10.769 "read": true, 00:09:10.769 "write": true, 00:09:10.769 "unmap": true, 00:09:10.769 "flush": true, 00:09:10.769 "reset": true, 00:09:10.769 "nvme_admin": false, 00:09:10.769 "nvme_io": false, 00:09:10.769 "nvme_io_md": false, 00:09:10.769 
"write_zeroes": true, 00:09:10.769 "zcopy": false, 00:09:10.769 "get_zone_info": false, 00:09:10.769 "zone_management": false, 00:09:10.770 "zone_append": false, 00:09:10.770 "compare": false, 00:09:10.770 "compare_and_write": false, 00:09:10.770 "abort": false, 00:09:10.770 "seek_hole": false, 00:09:10.770 "seek_data": false, 00:09:10.770 "copy": false, 00:09:10.770 "nvme_iov_md": false 00:09:10.770 }, 00:09:10.770 "memory_domains": [ 00:09:10.770 { 00:09:10.770 "dma_device_id": "system", 00:09:10.770 "dma_device_type": 1 00:09:10.770 }, 00:09:10.770 { 00:09:10.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.770 "dma_device_type": 2 00:09:10.770 }, 00:09:10.770 { 00:09:10.770 "dma_device_id": "system", 00:09:10.770 "dma_device_type": 1 00:09:10.770 }, 00:09:10.770 { 00:09:10.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.770 "dma_device_type": 2 00:09:10.770 } 00:09:10.770 ], 00:09:10.770 "driver_specific": { 00:09:10.770 "raid": { 00:09:10.770 "uuid": "1914c7bb-d7b0-4fa5-ada8-dced3bdfc33d", 00:09:10.770 "strip_size_kb": 64, 00:09:10.770 "state": "online", 00:09:10.770 "raid_level": "raid0", 00:09:10.770 "superblock": true, 00:09:10.770 "num_base_bdevs": 2, 00:09:10.770 "num_base_bdevs_discovered": 2, 00:09:10.770 "num_base_bdevs_operational": 2, 00:09:10.770 "base_bdevs_list": [ 00:09:10.770 { 00:09:10.770 "name": "pt1", 00:09:10.770 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:10.770 "is_configured": true, 00:09:10.770 "data_offset": 2048, 00:09:10.770 "data_size": 63488 00:09:10.770 }, 00:09:10.770 { 00:09:10.770 "name": "pt2", 00:09:10.770 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:10.770 "is_configured": true, 00:09:10.770 "data_offset": 2048, 00:09:10.770 "data_size": 63488 00:09:10.770 } 00:09:10.770 ] 00:09:10.770 } 00:09:10.770 } 00:09:10.770 }' 00:09:10.770 20:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:09:10.770 20:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:10.770 pt2' 00:09:10.770 20:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:10.770 20:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:10.770 20:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:10.770 20:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:10.770 20:05:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.770 20:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:10.770 20:05:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.770 20:05:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.770 20:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:10.770 20:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:10.770 20:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:10.770 20:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:10.770 20:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:10.770 20:05:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.770 20:05:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.770 20:05:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.028 20:05:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.028 20:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.028 20:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:11.028 20:05:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.028 20:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:11.028 20:05:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.028 [2024-10-17 20:05:56.443245] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:11.028 20:05:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.028 20:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 1914c7bb-d7b0-4fa5-ada8-dced3bdfc33d '!=' 1914c7bb-d7b0-4fa5-ada8-dced3bdfc33d ']' 00:09:11.028 20:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:09:11.028 20:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:11.028 20:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:11.028 20:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61064 00:09:11.028 20:05:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 61064 ']' 00:09:11.028 20:05:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 61064 00:09:11.028 20:05:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:09:11.028 20:05:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:11.028 20:05:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61064 00:09:11.028 killing process with pid 61064 
00:09:11.028 20:05:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:11.028 20:05:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:11.028 20:05:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61064' 00:09:11.028 20:05:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 61064 00:09:11.028 [2024-10-17 20:05:56.525172] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:11.028 20:05:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 61064 00:09:11.028 [2024-10-17 20:05:56.525298] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:11.028 [2024-10-17 20:05:56.525378] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:11.028 [2024-10-17 20:05:56.525412] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:11.287 [2024-10-17 20:05:56.698545] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:12.222 20:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:12.222 00:09:12.222 real 0m4.869s 00:09:12.222 user 0m7.256s 00:09:12.222 sys 0m0.724s 00:09:12.222 20:05:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:12.222 20:05:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.222 ************************************ 00:09:12.222 END TEST raid_superblock_test 00:09:12.222 ************************************ 00:09:12.222 20:05:57 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:09:12.222 20:05:57 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:12.222 20:05:57 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:09:12.222 20:05:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:12.222 ************************************ 00:09:12.222 START TEST raid_read_error_test 00:09:12.222 ************************************ 00:09:12.222 20:05:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 read 00:09:12.222 20:05:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:12.222 20:05:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:12.222 20:05:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:12.222 20:05:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:12.222 20:05:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:12.222 20:05:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:12.222 20:05:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:12.222 20:05:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:12.222 20:05:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:12.222 20:05:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:12.222 20:05:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:12.222 20:05:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:12.222 20:05:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:12.222 20:05:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:12.222 20:05:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:12.222 20:05:57 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:12.222 20:05:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:12.222 20:05:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:12.222 20:05:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:12.222 20:05:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:12.222 20:05:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:12.222 20:05:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:12.222 20:05:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.NzhSzbKj8p 00:09:12.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:12.222 20:05:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61275 00:09:12.222 20:05:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61275 00:09:12.222 20:05:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:12.222 20:05:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 61275 ']' 00:09:12.222 20:05:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.222 20:05:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:12.222 20:05:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:12.222 20:05:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:12.222 20:05:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.222 [2024-10-17 20:05:57.820129] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 00:09:12.222 [2024-10-17 20:05:57.820669] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61275 ] 00:09:12.481 [2024-10-17 20:05:57.983393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.481 [2024-10-17 20:05:58.107673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.740 [2024-10-17 20:05:58.298087] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:12.740 [2024-10-17 20:05:58.298209] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:13.307 20:05:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:13.307 20:05:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:13.307 20:05:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:13.307 20:05:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:13.307 20:05:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.307 20:05:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.307 BaseBdev1_malloc 00:09:13.307 20:05:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.307 20:05:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:09:13.307 20:05:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.307 20:05:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.307 true 00:09:13.307 20:05:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.307 20:05:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:13.307 20:05:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.307 20:05:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.307 [2024-10-17 20:05:58.847636] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:13.307 [2024-10-17 20:05:58.847720] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:13.307 [2024-10-17 20:05:58.847749] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:13.307 [2024-10-17 20:05:58.847766] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:13.307 [2024-10-17 20:05:58.850655] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:13.307 [2024-10-17 20:05:58.850872] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:13.307 BaseBdev1 00:09:13.307 20:05:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.307 20:05:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:13.307 20:05:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:13.307 20:05:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.307 20:05:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:09:13.307 BaseBdev2_malloc 00:09:13.307 20:05:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.307 20:05:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:13.307 20:05:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.307 20:05:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.307 true 00:09:13.307 20:05:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.307 20:05:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:13.307 20:05:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.307 20:05:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.307 [2024-10-17 20:05:58.908623] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:13.308 [2024-10-17 20:05:58.908719] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:13.308 [2024-10-17 20:05:58.908744] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:13.308 [2024-10-17 20:05:58.908760] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:13.308 [2024-10-17 20:05:58.911647] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:13.308 [2024-10-17 20:05:58.911711] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:13.308 BaseBdev2 00:09:13.308 20:05:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.308 20:05:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:13.308 20:05:58 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.308 20:05:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.308 [2024-10-17 20:05:58.920773] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:13.308 [2024-10-17 20:05:58.923568] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:13.308 [2024-10-17 20:05:58.923873] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:13.308 [2024-10-17 20:05:58.923899] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:13.308 [2024-10-17 20:05:58.924394] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:13.308 [2024-10-17 20:05:58.924771] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:13.308 [2024-10-17 20:05:58.924923] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:13.308 [2024-10-17 20:05:58.925250] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:13.308 20:05:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.308 20:05:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:13.308 20:05:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:13.308 20:05:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:13.308 20:05:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:13.308 20:05:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:13.308 20:05:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:09:13.308 20:05:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.308 20:05:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.308 20:05:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.308 20:05:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.308 20:05:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:13.308 20:05:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.308 20:05:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.308 20:05:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.308 20:05:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.566 20:05:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.566 "name": "raid_bdev1", 00:09:13.566 "uuid": "b4eedae7-d8ca-4fbe-9721-249379b0595c", 00:09:13.566 "strip_size_kb": 64, 00:09:13.566 "state": "online", 00:09:13.566 "raid_level": "raid0", 00:09:13.566 "superblock": true, 00:09:13.566 "num_base_bdevs": 2, 00:09:13.566 "num_base_bdevs_discovered": 2, 00:09:13.566 "num_base_bdevs_operational": 2, 00:09:13.566 "base_bdevs_list": [ 00:09:13.566 { 00:09:13.566 "name": "BaseBdev1", 00:09:13.566 "uuid": "35cf3e5f-ab36-5bf2-ba0c-5743cd6898c0", 00:09:13.566 "is_configured": true, 00:09:13.566 "data_offset": 2048, 00:09:13.566 "data_size": 63488 00:09:13.566 }, 00:09:13.566 { 00:09:13.566 "name": "BaseBdev2", 00:09:13.566 "uuid": "9a557d3c-3a92-5360-8fc1-ad91fd57bb4b", 00:09:13.566 "is_configured": true, 00:09:13.566 "data_offset": 2048, 00:09:13.566 "data_size": 63488 00:09:13.566 } 00:09:13.566 ] 00:09:13.566 }' 00:09:13.566 20:05:58 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.566 20:05:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.824 20:05:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:13.824 20:05:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:14.082 [2024-10-17 20:05:59.608321] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:15.016 20:06:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:15.016 20:06:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.016 20:06:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.016 20:06:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.016 20:06:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:15.016 20:06:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:15.016 20:06:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:15.016 20:06:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:15.016 20:06:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:15.016 20:06:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:15.016 20:06:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:15.016 20:06:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.016 20:06:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:09:15.016 20:06:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.016 20:06:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.016 20:06:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.016 20:06:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.016 20:06:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.016 20:06:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.016 20:06:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:15.016 20:06:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.016 20:06:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.016 20:06:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.016 "name": "raid_bdev1", 00:09:15.016 "uuid": "b4eedae7-d8ca-4fbe-9721-249379b0595c", 00:09:15.016 "strip_size_kb": 64, 00:09:15.016 "state": "online", 00:09:15.016 "raid_level": "raid0", 00:09:15.016 "superblock": true, 00:09:15.016 "num_base_bdevs": 2, 00:09:15.016 "num_base_bdevs_discovered": 2, 00:09:15.016 "num_base_bdevs_operational": 2, 00:09:15.016 "base_bdevs_list": [ 00:09:15.016 { 00:09:15.016 "name": "BaseBdev1", 00:09:15.016 "uuid": "35cf3e5f-ab36-5bf2-ba0c-5743cd6898c0", 00:09:15.016 "is_configured": true, 00:09:15.016 "data_offset": 2048, 00:09:15.016 "data_size": 63488 00:09:15.016 }, 00:09:15.016 { 00:09:15.016 "name": "BaseBdev2", 00:09:15.016 "uuid": "9a557d3c-3a92-5360-8fc1-ad91fd57bb4b", 00:09:15.016 "is_configured": true, 00:09:15.016 "data_offset": 2048, 00:09:15.016 "data_size": 63488 00:09:15.016 } 00:09:15.016 ] 00:09:15.016 }' 00:09:15.016 20:06:00 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.016 20:06:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.583 20:06:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:15.583 20:06:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.583 20:06:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.583 [2024-10-17 20:06:01.019275] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:15.583 [2024-10-17 20:06:01.019318] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:15.583 [2024-10-17 20:06:01.022876] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:15.583 [2024-10-17 20:06:01.023116] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:15.583 [2024-10-17 20:06:01.023288] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:15.583 [2024-10-17 20:06:01.023519] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:15.583 { 00:09:15.583 "results": [ 00:09:15.583 { 00:09:15.583 "job": "raid_bdev1", 00:09:15.583 "core_mask": "0x1", 00:09:15.583 "workload": "randrw", 00:09:15.583 "percentage": 50, 00:09:15.583 "status": "finished", 00:09:15.583 "queue_depth": 1, 00:09:15.584 "io_size": 131072, 00:09:15.584 "runtime": 1.408214, 00:09:15.584 "iops": 10832.870572228368, 00:09:15.584 "mibps": 1354.108821528546, 00:09:15.584 "io_failed": 1, 00:09:15.584 "io_timeout": 0, 00:09:15.584 "avg_latency_us": 129.49619869380751, 00:09:15.584 "min_latency_us": 37.93454545454546, 00:09:15.584 "max_latency_us": 1966.08 00:09:15.584 } 00:09:15.584 ], 00:09:15.584 "core_count": 1 00:09:15.584 } 00:09:15.584 20:06:01 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.584 20:06:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61275 00:09:15.584 20:06:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 61275 ']' 00:09:15.584 20:06:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 61275 00:09:15.584 20:06:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:09:15.584 20:06:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:15.584 20:06:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61275 00:09:15.584 killing process with pid 61275 00:09:15.584 20:06:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:15.584 20:06:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:15.584 20:06:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61275' 00:09:15.584 20:06:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 61275 00:09:15.584 [2024-10-17 20:06:01.063074] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:15.584 20:06:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 61275 00:09:15.584 [2024-10-17 20:06:01.181683] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:16.957 20:06:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.NzhSzbKj8p 00:09:16.957 20:06:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:16.957 20:06:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:16.957 20:06:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:09:16.957 20:06:02 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:16.957 20:06:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:16.957 20:06:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:16.957 20:06:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:09:16.957 00:09:16.957 real 0m4.496s 00:09:16.957 user 0m5.688s 00:09:16.957 sys 0m0.540s 00:09:16.957 20:06:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:16.957 ************************************ 00:09:16.957 END TEST raid_read_error_test 00:09:16.957 ************************************ 00:09:16.957 20:06:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.957 20:06:02 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:09:16.957 20:06:02 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:16.957 20:06:02 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:16.957 20:06:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:16.957 ************************************ 00:09:16.957 START TEST raid_write_error_test 00:09:16.957 ************************************ 00:09:16.957 20:06:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 write 00:09:16.957 20:06:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:16.957 20:06:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:16.957 20:06:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:16.957 20:06:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:16.957 20:06:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:16.957 20:06:02 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:16.957 20:06:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:16.957 20:06:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:16.957 20:06:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:16.957 20:06:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:16.957 20:06:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:16.957 20:06:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:16.957 20:06:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:16.957 20:06:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:16.957 20:06:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:16.957 20:06:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:16.957 20:06:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:16.957 20:06:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:16.957 20:06:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:16.957 20:06:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:16.957 20:06:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:16.957 20:06:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:16.957 20:06:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.BEhnqYFxi8 00:09:16.957 20:06:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61421 00:09:16.957 20:06:02 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61421 00:09:16.957 20:06:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 61421 ']' 00:09:16.957 20:06:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:16.957 20:06:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:16.957 20:06:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:16.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:16.957 20:06:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:16.957 20:06:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:16.957 20:06:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.957 [2024-10-17 20:06:02.386192] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 
00:09:16.957 [2024-10-17 20:06:02.386414] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61421 ] 00:09:16.958 [2024-10-17 20:06:02.564701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.216 [2024-10-17 20:06:02.693787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.475 [2024-10-17 20:06:02.900290] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:17.475 [2024-10-17 20:06:02.900370] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:17.734 20:06:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:17.734 20:06:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:17.734 20:06:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:17.734 20:06:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:17.734 20:06:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.734 20:06:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.993 BaseBdev1_malloc 00:09:17.993 20:06:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.993 20:06:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:17.993 20:06:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.993 20:06:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.993 true 00:09:17.993 20:06:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:17.993 20:06:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:17.993 20:06:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.993 20:06:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.993 [2024-10-17 20:06:03.441892] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:17.993 [2024-10-17 20:06:03.441965] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:17.993 [2024-10-17 20:06:03.442033] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:17.993 [2024-10-17 20:06:03.442057] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:17.993 [2024-10-17 20:06:03.444825] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:17.993 [2024-10-17 20:06:03.445049] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:17.993 BaseBdev1 00:09:17.993 20:06:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.993 20:06:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:17.993 20:06:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:17.993 20:06:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.993 20:06:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.993 BaseBdev2_malloc 00:09:17.993 20:06:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.993 20:06:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:17.993 20:06:03 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.993 20:06:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.993 true 00:09:17.993 20:06:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.993 20:06:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:17.993 20:06:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.993 20:06:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.993 [2024-10-17 20:06:03.499698] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:17.993 [2024-10-17 20:06:03.499922] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:17.993 [2024-10-17 20:06:03.499962] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:17.993 [2024-10-17 20:06:03.499982] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:17.993 [2024-10-17 20:06:03.502918] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:17.993 [2024-10-17 20:06:03.503134] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:17.993 BaseBdev2 00:09:17.994 20:06:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.994 20:06:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:17.994 20:06:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.994 20:06:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.994 [2024-10-17 20:06:03.507817] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:09:17.994 [2024-10-17 20:06:03.510345] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:17.994 [2024-10-17 20:06:03.510601] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:17.994 [2024-10-17 20:06:03.510627] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:17.994 [2024-10-17 20:06:03.510907] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:17.994 [2024-10-17 20:06:03.511196] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:17.994 [2024-10-17 20:06:03.511214] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:17.994 [2024-10-17 20:06:03.511409] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:17.994 20:06:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.994 20:06:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:17.994 20:06:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:17.994 20:06:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:17.994 20:06:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:17.994 20:06:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:17.994 20:06:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:17.994 20:06:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.994 20:06:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.994 20:06:03 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.994 20:06:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.994 20:06:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.994 20:06:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.994 20:06:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.994 20:06:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:17.994 20:06:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.994 20:06:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.994 "name": "raid_bdev1", 00:09:17.994 "uuid": "50883a37-e350-4a83-bcbe-1ab5a18efa36", 00:09:17.994 "strip_size_kb": 64, 00:09:17.994 "state": "online", 00:09:17.994 "raid_level": "raid0", 00:09:17.994 "superblock": true, 00:09:17.994 "num_base_bdevs": 2, 00:09:17.994 "num_base_bdevs_discovered": 2, 00:09:17.994 "num_base_bdevs_operational": 2, 00:09:17.994 "base_bdevs_list": [ 00:09:17.994 { 00:09:17.994 "name": "BaseBdev1", 00:09:17.994 "uuid": "ef0d4f51-161d-512e-ae56-1b1101261a1e", 00:09:17.994 "is_configured": true, 00:09:17.994 "data_offset": 2048, 00:09:17.994 "data_size": 63488 00:09:17.994 }, 00:09:17.994 { 00:09:17.994 "name": "BaseBdev2", 00:09:17.994 "uuid": "117d2f42-436b-5386-a599-95dd556c0ba1", 00:09:17.994 "is_configured": true, 00:09:17.994 "data_offset": 2048, 00:09:17.994 "data_size": 63488 00:09:17.994 } 00:09:17.994 ] 00:09:17.994 }' 00:09:17.994 20:06:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.994 20:06:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.561 20:06:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:18.561 20:06:04 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:18.561 [2024-10-17 20:06:04.149490] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:19.497 20:06:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:19.497 20:06:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.497 20:06:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.497 20:06:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.497 20:06:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:19.497 20:06:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:19.497 20:06:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:19.497 20:06:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:19.497 20:06:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:19.497 20:06:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:19.497 20:06:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:19.497 20:06:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.497 20:06:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:19.497 20:06:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.497 20:06:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.497 20:06:05 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.497 20:06:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.497 20:06:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.497 20:06:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:19.497 20:06:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.497 20:06:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.497 20:06:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.497 20:06:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.497 "name": "raid_bdev1", 00:09:19.497 "uuid": "50883a37-e350-4a83-bcbe-1ab5a18efa36", 00:09:19.497 "strip_size_kb": 64, 00:09:19.497 "state": "online", 00:09:19.497 "raid_level": "raid0", 00:09:19.497 "superblock": true, 00:09:19.497 "num_base_bdevs": 2, 00:09:19.497 "num_base_bdevs_discovered": 2, 00:09:19.497 "num_base_bdevs_operational": 2, 00:09:19.497 "base_bdevs_list": [ 00:09:19.497 { 00:09:19.497 "name": "BaseBdev1", 00:09:19.497 "uuid": "ef0d4f51-161d-512e-ae56-1b1101261a1e", 00:09:19.497 "is_configured": true, 00:09:19.497 "data_offset": 2048, 00:09:19.497 "data_size": 63488 00:09:19.497 }, 00:09:19.497 { 00:09:19.497 "name": "BaseBdev2", 00:09:19.497 "uuid": "117d2f42-436b-5386-a599-95dd556c0ba1", 00:09:19.497 "is_configured": true, 00:09:19.497 "data_offset": 2048, 00:09:19.497 "data_size": 63488 00:09:19.497 } 00:09:19.497 ] 00:09:19.497 }' 00:09:19.497 20:06:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.497 20:06:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.065 20:06:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:09:20.065 20:06:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.065 20:06:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.065 [2024-10-17 20:06:05.580019] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:20.065 [2024-10-17 20:06:05.580214] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:20.065 [2024-10-17 20:06:05.583787] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:20.065 [2024-10-17 20:06:05.584018] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:20.065 { 00:09:20.065 "results": [ 00:09:20.065 { 00:09:20.065 "job": "raid_bdev1", 00:09:20.065 "core_mask": "0x1", 00:09:20.065 "workload": "randrw", 00:09:20.065 "percentage": 50, 00:09:20.065 "status": "finished", 00:09:20.065 "queue_depth": 1, 00:09:20.065 "io_size": 131072, 00:09:20.065 "runtime": 1.428299, 00:09:20.065 "iops": 11671.925836256974, 00:09:20.065 "mibps": 1458.9907295321218, 00:09:20.065 "io_failed": 1, 00:09:20.065 "io_timeout": 0, 00:09:20.065 "avg_latency_us": 119.6387087768278, 00:09:20.065 "min_latency_us": 35.84, 00:09:20.065 "max_latency_us": 1861.8181818181818 00:09:20.065 } 00:09:20.065 ], 00:09:20.065 "core_count": 1 00:09:20.065 } 00:09:20.065 [2024-10-17 20:06:05.584114] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:20.065 [2024-10-17 20:06:05.584144] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:20.065 20:06:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.065 20:06:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61421 00:09:20.065 20:06:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 
-- # '[' -z 61421 ']' 00:09:20.065 20:06:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 61421 00:09:20.065 20:06:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:09:20.065 20:06:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:20.065 20:06:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61421 00:09:20.065 killing process with pid 61421 00:09:20.065 20:06:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:20.065 20:06:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:20.065 20:06:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61421' 00:09:20.065 20:06:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 61421 00:09:20.065 [2024-10-17 20:06:05.622340] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:20.065 20:06:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 61421 00:09:20.324 [2024-10-17 20:06:05.749361] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:21.261 20:06:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.BEhnqYFxi8 00:09:21.261 20:06:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:21.261 20:06:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:21.261 20:06:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:09:21.261 20:06:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:21.261 20:06:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:21.261 20:06:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 
00:09:21.261 20:06:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:09:21.261 00:09:21.261 real 0m4.527s 00:09:21.261 user 0m5.706s 00:09:21.261 sys 0m0.562s 00:09:21.261 20:06:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:21.261 ************************************ 00:09:21.261 END TEST raid_write_error_test 00:09:21.261 ************************************ 00:09:21.261 20:06:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.261 20:06:06 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:21.261 20:06:06 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:09:21.261 20:06:06 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:21.261 20:06:06 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:21.261 20:06:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:21.261 ************************************ 00:09:21.261 START TEST raid_state_function_test 00:09:21.261 ************************************ 00:09:21.261 20:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 false 00:09:21.261 20:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:21.261 20:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:21.261 20:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:21.261 20:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:21.261 20:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:21.261 20:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:21.261 20:06:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:21.261 20:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:21.261 20:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:21.261 20:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:21.261 20:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:21.261 20:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:21.261 20:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:21.261 20:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:21.261 20:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:21.261 20:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:21.261 20:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:21.261 20:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:21.261 20:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:21.261 20:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:21.261 20:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:21.261 20:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:21.261 20:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:21.261 Process raid pid: 61559 00:09:21.261 20:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61559 00:09:21.261 20:06:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61559' 00:09:21.261 20:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61559 00:09:21.261 20:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:21.261 20:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 61559 ']' 00:09:21.261 20:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:21.261 20:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:21.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:21.261 20:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:21.261 20:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:21.261 20:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.520 [2024-10-17 20:06:06.964366] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 
00:09:21.520 [2024-10-17 20:06:06.964783] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:21.520 [2024-10-17 20:06:07.140788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.781 [2024-10-17 20:06:07.282668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.040 [2024-10-17 20:06:07.501274] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:22.040 [2024-10-17 20:06:07.501328] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:22.302 20:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:22.302 20:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:09:22.302 20:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:22.302 20:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.302 20:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.564 [2024-10-17 20:06:07.954522] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:22.564 [2024-10-17 20:06:07.954608] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:22.564 [2024-10-17 20:06:07.954627] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:22.564 [2024-10-17 20:06:07.954644] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:22.565 20:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.565 20:06:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:22.565 20:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:22.565 20:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:22.565 20:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:22.565 20:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:22.565 20:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:22.565 20:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.565 20:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.565 20:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.565 20:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.565 20:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.565 20:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.565 20:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.565 20:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.565 20:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.565 20:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.565 "name": "Existed_Raid", 00:09:22.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.565 "strip_size_kb": 64, 00:09:22.565 "state": "configuring", 00:09:22.565 
"raid_level": "concat", 00:09:22.565 "superblock": false, 00:09:22.565 "num_base_bdevs": 2, 00:09:22.565 "num_base_bdevs_discovered": 0, 00:09:22.565 "num_base_bdevs_operational": 2, 00:09:22.565 "base_bdevs_list": [ 00:09:22.565 { 00:09:22.565 "name": "BaseBdev1", 00:09:22.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.565 "is_configured": false, 00:09:22.565 "data_offset": 0, 00:09:22.565 "data_size": 0 00:09:22.565 }, 00:09:22.565 { 00:09:22.565 "name": "BaseBdev2", 00:09:22.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.565 "is_configured": false, 00:09:22.565 "data_offset": 0, 00:09:22.565 "data_size": 0 00:09:22.565 } 00:09:22.565 ] 00:09:22.565 }' 00:09:22.565 20:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.565 20:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.824 20:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:22.824 20:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.824 20:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.824 [2024-10-17 20:06:08.454608] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:22.824 [2024-10-17 20:06:08.454652] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:22.824 20:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.824 20:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:22.824 20:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.824 20:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:22.824 [2024-10-17 20:06:08.462616] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:22.824 [2024-10-17 20:06:08.462686] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:22.824 [2024-10-17 20:06:08.462702] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:22.824 [2024-10-17 20:06:08.462721] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:22.824 20:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.824 20:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:22.824 20:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.824 20:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.126 [2024-10-17 20:06:08.507795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:23.126 BaseBdev1 00:09:23.126 20:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.126 20:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:23.126 20:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:23.126 20:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:23.126 20:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:23.126 20:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:23.126 20:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:23.126 20:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
rpc_cmd bdev_wait_for_examine 00:09:23.126 20:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.126 20:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.126 20:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.126 20:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:23.126 20:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.126 20:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.126 [ 00:09:23.126 { 00:09:23.126 "name": "BaseBdev1", 00:09:23.126 "aliases": [ 00:09:23.126 "f95e718c-3d63-47f0-a340-01251ae9a376" 00:09:23.126 ], 00:09:23.126 "product_name": "Malloc disk", 00:09:23.126 "block_size": 512, 00:09:23.126 "num_blocks": 65536, 00:09:23.126 "uuid": "f95e718c-3d63-47f0-a340-01251ae9a376", 00:09:23.126 "assigned_rate_limits": { 00:09:23.126 "rw_ios_per_sec": 0, 00:09:23.126 "rw_mbytes_per_sec": 0, 00:09:23.126 "r_mbytes_per_sec": 0, 00:09:23.126 "w_mbytes_per_sec": 0 00:09:23.126 }, 00:09:23.126 "claimed": true, 00:09:23.126 "claim_type": "exclusive_write", 00:09:23.126 "zoned": false, 00:09:23.126 "supported_io_types": { 00:09:23.126 "read": true, 00:09:23.126 "write": true, 00:09:23.126 "unmap": true, 00:09:23.126 "flush": true, 00:09:23.126 "reset": true, 00:09:23.126 "nvme_admin": false, 00:09:23.126 "nvme_io": false, 00:09:23.126 "nvme_io_md": false, 00:09:23.126 "write_zeroes": true, 00:09:23.126 "zcopy": true, 00:09:23.126 "get_zone_info": false, 00:09:23.126 "zone_management": false, 00:09:23.126 "zone_append": false, 00:09:23.126 "compare": false, 00:09:23.126 "compare_and_write": false, 00:09:23.126 "abort": true, 00:09:23.126 "seek_hole": false, 00:09:23.126 "seek_data": false, 00:09:23.126 "copy": true, 00:09:23.126 "nvme_iov_md": 
false 00:09:23.126 }, 00:09:23.126 "memory_domains": [ 00:09:23.126 { 00:09:23.126 "dma_device_id": "system", 00:09:23.126 "dma_device_type": 1 00:09:23.126 }, 00:09:23.126 { 00:09:23.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.126 "dma_device_type": 2 00:09:23.126 } 00:09:23.126 ], 00:09:23.126 "driver_specific": {} 00:09:23.126 } 00:09:23.126 ] 00:09:23.126 20:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.126 20:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:23.126 20:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:23.126 20:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.126 20:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.126 20:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:23.126 20:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:23.126 20:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:23.126 20:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.126 20:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.126 20:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.126 20:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.126 20:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.126 20:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.126 
20:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.126 20:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.127 20:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.127 20:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.127 "name": "Existed_Raid", 00:09:23.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.127 "strip_size_kb": 64, 00:09:23.127 "state": "configuring", 00:09:23.127 "raid_level": "concat", 00:09:23.127 "superblock": false, 00:09:23.127 "num_base_bdevs": 2, 00:09:23.127 "num_base_bdevs_discovered": 1, 00:09:23.127 "num_base_bdevs_operational": 2, 00:09:23.127 "base_bdevs_list": [ 00:09:23.127 { 00:09:23.127 "name": "BaseBdev1", 00:09:23.127 "uuid": "f95e718c-3d63-47f0-a340-01251ae9a376", 00:09:23.127 "is_configured": true, 00:09:23.127 "data_offset": 0, 00:09:23.127 "data_size": 65536 00:09:23.127 }, 00:09:23.127 { 00:09:23.127 "name": "BaseBdev2", 00:09:23.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.127 "is_configured": false, 00:09:23.127 "data_offset": 0, 00:09:23.127 "data_size": 0 00:09:23.127 } 00:09:23.127 ] 00:09:23.127 }' 00:09:23.127 20:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.127 20:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.426 20:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:23.426 20:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.426 20:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.426 [2024-10-17 20:06:09.040035] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:23.427 [2024-10-17 20:06:09.040112] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:23.427 20:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.427 20:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:23.427 20:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.427 20:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.427 [2024-10-17 20:06:09.048115] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:23.427 [2024-10-17 20:06:09.050914] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:23.427 [2024-10-17 20:06:09.050983] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:23.427 20:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.427 20:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:23.427 20:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:23.427 20:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:23.427 20:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.427 20:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.427 20:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:23.427 20:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:23.427 20:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:09:23.427 20:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.427 20:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.427 20:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.427 20:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.427 20:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.427 20:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.427 20:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.427 20:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.427 20:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.686 20:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.686 "name": "Existed_Raid", 00:09:23.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.686 "strip_size_kb": 64, 00:09:23.686 "state": "configuring", 00:09:23.686 "raid_level": "concat", 00:09:23.686 "superblock": false, 00:09:23.686 "num_base_bdevs": 2, 00:09:23.686 "num_base_bdevs_discovered": 1, 00:09:23.686 "num_base_bdevs_operational": 2, 00:09:23.686 "base_bdevs_list": [ 00:09:23.686 { 00:09:23.686 "name": "BaseBdev1", 00:09:23.686 "uuid": "f95e718c-3d63-47f0-a340-01251ae9a376", 00:09:23.686 "is_configured": true, 00:09:23.686 "data_offset": 0, 00:09:23.686 "data_size": 65536 00:09:23.686 }, 00:09:23.686 { 00:09:23.686 "name": "BaseBdev2", 00:09:23.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.686 "is_configured": false, 00:09:23.686 "data_offset": 0, 00:09:23.686 "data_size": 0 00:09:23.686 } 
00:09:23.686 ] 00:09:23.686 }' 00:09:23.686 20:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.686 20:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.945 20:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:23.945 20:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.945 20:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.204 [2024-10-17 20:06:09.601936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:24.204 [2024-10-17 20:06:09.602277] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:24.204 [2024-10-17 20:06:09.602303] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:24.204 [2024-10-17 20:06:09.602669] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:24.204 [2024-10-17 20:06:09.602891] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:24.204 [2024-10-17 20:06:09.602914] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:24.204 [2024-10-17 20:06:09.603235] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:24.204 BaseBdev2 00:09:24.204 20:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.204 20:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:24.204 20:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:24.204 20:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:24.204 20:06:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:24.204 20:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:24.204 20:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:24.204 20:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:24.204 20:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.204 20:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.204 20:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.204 20:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:24.204 20:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.204 20:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.204 [ 00:09:24.204 { 00:09:24.204 "name": "BaseBdev2", 00:09:24.204 "aliases": [ 00:09:24.204 "4ec67a38-86e1-4b34-a007-13517f6169e0" 00:09:24.204 ], 00:09:24.204 "product_name": "Malloc disk", 00:09:24.204 "block_size": 512, 00:09:24.204 "num_blocks": 65536, 00:09:24.205 "uuid": "4ec67a38-86e1-4b34-a007-13517f6169e0", 00:09:24.205 "assigned_rate_limits": { 00:09:24.205 "rw_ios_per_sec": 0, 00:09:24.205 "rw_mbytes_per_sec": 0, 00:09:24.205 "r_mbytes_per_sec": 0, 00:09:24.205 "w_mbytes_per_sec": 0 00:09:24.205 }, 00:09:24.205 "claimed": true, 00:09:24.205 "claim_type": "exclusive_write", 00:09:24.205 "zoned": false, 00:09:24.205 "supported_io_types": { 00:09:24.205 "read": true, 00:09:24.205 "write": true, 00:09:24.205 "unmap": true, 00:09:24.205 "flush": true, 00:09:24.205 "reset": true, 00:09:24.205 "nvme_admin": false, 00:09:24.205 "nvme_io": false, 00:09:24.205 "nvme_io_md": 
false, 00:09:24.205 "write_zeroes": true, 00:09:24.205 "zcopy": true, 00:09:24.205 "get_zone_info": false, 00:09:24.205 "zone_management": false, 00:09:24.205 "zone_append": false, 00:09:24.205 "compare": false, 00:09:24.205 "compare_and_write": false, 00:09:24.205 "abort": true, 00:09:24.205 "seek_hole": false, 00:09:24.205 "seek_data": false, 00:09:24.205 "copy": true, 00:09:24.205 "nvme_iov_md": false 00:09:24.205 }, 00:09:24.205 "memory_domains": [ 00:09:24.205 { 00:09:24.205 "dma_device_id": "system", 00:09:24.205 "dma_device_type": 1 00:09:24.205 }, 00:09:24.205 { 00:09:24.205 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.205 "dma_device_type": 2 00:09:24.205 } 00:09:24.205 ], 00:09:24.205 "driver_specific": {} 00:09:24.205 } 00:09:24.205 ] 00:09:24.205 20:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.205 20:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:24.205 20:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:24.205 20:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:24.205 20:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:09:24.205 20:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:24.205 20:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:24.205 20:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:24.205 20:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:24.205 20:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:24.205 20:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:24.205 20:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.205 20:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.205 20:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.205 20:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.205 20:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.205 20:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.205 20:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.205 20:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.205 20:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.205 "name": "Existed_Raid", 00:09:24.205 "uuid": "20447fd2-2114-49b8-be8b-dacb96c996b0", 00:09:24.205 "strip_size_kb": 64, 00:09:24.205 "state": "online", 00:09:24.205 "raid_level": "concat", 00:09:24.205 "superblock": false, 00:09:24.205 "num_base_bdevs": 2, 00:09:24.205 "num_base_bdevs_discovered": 2, 00:09:24.205 "num_base_bdevs_operational": 2, 00:09:24.205 "base_bdevs_list": [ 00:09:24.205 { 00:09:24.205 "name": "BaseBdev1", 00:09:24.205 "uuid": "f95e718c-3d63-47f0-a340-01251ae9a376", 00:09:24.205 "is_configured": true, 00:09:24.205 "data_offset": 0, 00:09:24.205 "data_size": 65536 00:09:24.205 }, 00:09:24.205 { 00:09:24.205 "name": "BaseBdev2", 00:09:24.205 "uuid": "4ec67a38-86e1-4b34-a007-13517f6169e0", 00:09:24.205 "is_configured": true, 00:09:24.205 "data_offset": 0, 00:09:24.205 "data_size": 65536 00:09:24.205 } 00:09:24.205 ] 00:09:24.205 }' 00:09:24.205 20:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:09:24.205 20:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.772 20:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:24.772 20:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:24.772 20:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:24.772 20:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:24.772 20:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:24.772 20:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:24.772 20:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:24.772 20:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:24.772 20:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.772 20:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.772 [2024-10-17 20:06:10.138519] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:24.772 20:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.772 20:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:24.772 "name": "Existed_Raid", 00:09:24.772 "aliases": [ 00:09:24.772 "20447fd2-2114-49b8-be8b-dacb96c996b0" 00:09:24.772 ], 00:09:24.772 "product_name": "Raid Volume", 00:09:24.772 "block_size": 512, 00:09:24.772 "num_blocks": 131072, 00:09:24.772 "uuid": "20447fd2-2114-49b8-be8b-dacb96c996b0", 00:09:24.772 "assigned_rate_limits": { 00:09:24.772 "rw_ios_per_sec": 0, 00:09:24.772 "rw_mbytes_per_sec": 0, 00:09:24.772 "r_mbytes_per_sec": 
0, 00:09:24.772 "w_mbytes_per_sec": 0 00:09:24.772 }, 00:09:24.772 "claimed": false, 00:09:24.772 "zoned": false, 00:09:24.772 "supported_io_types": { 00:09:24.772 "read": true, 00:09:24.772 "write": true, 00:09:24.772 "unmap": true, 00:09:24.772 "flush": true, 00:09:24.772 "reset": true, 00:09:24.772 "nvme_admin": false, 00:09:24.772 "nvme_io": false, 00:09:24.772 "nvme_io_md": false, 00:09:24.772 "write_zeroes": true, 00:09:24.772 "zcopy": false, 00:09:24.772 "get_zone_info": false, 00:09:24.772 "zone_management": false, 00:09:24.772 "zone_append": false, 00:09:24.772 "compare": false, 00:09:24.772 "compare_and_write": false, 00:09:24.772 "abort": false, 00:09:24.772 "seek_hole": false, 00:09:24.772 "seek_data": false, 00:09:24.772 "copy": false, 00:09:24.772 "nvme_iov_md": false 00:09:24.772 }, 00:09:24.772 "memory_domains": [ 00:09:24.772 { 00:09:24.772 "dma_device_id": "system", 00:09:24.772 "dma_device_type": 1 00:09:24.772 }, 00:09:24.772 { 00:09:24.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.772 "dma_device_type": 2 00:09:24.772 }, 00:09:24.772 { 00:09:24.772 "dma_device_id": "system", 00:09:24.772 "dma_device_type": 1 00:09:24.772 }, 00:09:24.772 { 00:09:24.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.772 "dma_device_type": 2 00:09:24.772 } 00:09:24.772 ], 00:09:24.772 "driver_specific": { 00:09:24.772 "raid": { 00:09:24.772 "uuid": "20447fd2-2114-49b8-be8b-dacb96c996b0", 00:09:24.772 "strip_size_kb": 64, 00:09:24.772 "state": "online", 00:09:24.772 "raid_level": "concat", 00:09:24.772 "superblock": false, 00:09:24.772 "num_base_bdevs": 2, 00:09:24.772 "num_base_bdevs_discovered": 2, 00:09:24.772 "num_base_bdevs_operational": 2, 00:09:24.772 "base_bdevs_list": [ 00:09:24.772 { 00:09:24.772 "name": "BaseBdev1", 00:09:24.772 "uuid": "f95e718c-3d63-47f0-a340-01251ae9a376", 00:09:24.772 "is_configured": true, 00:09:24.772 "data_offset": 0, 00:09:24.772 "data_size": 65536 00:09:24.772 }, 00:09:24.772 { 00:09:24.772 "name": "BaseBdev2", 
00:09:24.772 "uuid": "4ec67a38-86e1-4b34-a007-13517f6169e0", 00:09:24.772 "is_configured": true, 00:09:24.772 "data_offset": 0, 00:09:24.772 "data_size": 65536 00:09:24.772 } 00:09:24.772 ] 00:09:24.772 } 00:09:24.772 } 00:09:24.772 }' 00:09:24.772 20:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:24.772 20:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:24.772 BaseBdev2' 00:09:24.772 20:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:24.772 20:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:24.772 20:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:24.772 20:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:24.772 20:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.772 20:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.772 20:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:24.772 20:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.772 20:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:24.772 20:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:24.772 20:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:24.772 20:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:09:24.772 20:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:24.772 20:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.772 20:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.772 20:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.772 20:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:24.772 20:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:24.772 20:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:24.772 20:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.772 20:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.772 [2024-10-17 20:06:10.398337] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:24.772 [2024-10-17 20:06:10.398396] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:24.772 [2024-10-17 20:06:10.398458] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:25.031 20:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.031 20:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:25.031 20:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:25.031 20:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:25.031 20:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:25.032 20:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:09:25.032 20:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:09:25.032 20:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:25.032 20:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:25.032 20:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:25.032 20:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:25.032 20:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:25.032 20:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.032 20:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.032 20:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.032 20:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.032 20:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.032 20:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.032 20:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.032 20:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.032 20:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.032 20:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.032 "name": "Existed_Raid", 00:09:25.032 "uuid": "20447fd2-2114-49b8-be8b-dacb96c996b0", 00:09:25.032 "strip_size_kb": 64, 00:09:25.032 
"state": "offline", 00:09:25.032 "raid_level": "concat", 00:09:25.032 "superblock": false, 00:09:25.032 "num_base_bdevs": 2, 00:09:25.032 "num_base_bdevs_discovered": 1, 00:09:25.032 "num_base_bdevs_operational": 1, 00:09:25.032 "base_bdevs_list": [ 00:09:25.032 { 00:09:25.032 "name": null, 00:09:25.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.032 "is_configured": false, 00:09:25.032 "data_offset": 0, 00:09:25.032 "data_size": 65536 00:09:25.032 }, 00:09:25.032 { 00:09:25.032 "name": "BaseBdev2", 00:09:25.032 "uuid": "4ec67a38-86e1-4b34-a007-13517f6169e0", 00:09:25.032 "is_configured": true, 00:09:25.032 "data_offset": 0, 00:09:25.032 "data_size": 65536 00:09:25.032 } 00:09:25.032 ] 00:09:25.032 }' 00:09:25.032 20:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.032 20:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.597 20:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:25.597 20:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:25.597 20:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.597 20:06:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.597 20:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:25.597 20:06:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.597 20:06:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.597 20:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:25.597 20:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:25.597 20:06:11 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:25.597 20:06:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.597 20:06:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.597 [2024-10-17 20:06:11.081210] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:25.597 [2024-10-17 20:06:11.081419] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:25.597 20:06:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.597 20:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:25.597 20:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:25.597 20:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.597 20:06:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.597 20:06:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.597 20:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:25.597 20:06:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.597 20:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:25.597 20:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:25.597 20:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:25.597 20:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61559 00:09:25.597 20:06:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 61559 ']' 00:09:25.597 20:06:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # kill -0 61559 00:09:25.597 20:06:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:09:25.597 20:06:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:25.597 20:06:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61559 00:09:25.597 killing process with pid 61559 00:09:25.597 20:06:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:25.597 20:06:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:25.597 20:06:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61559' 00:09:25.597 20:06:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 61559 00:09:25.597 [2024-10-17 20:06:11.244780] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:25.597 20:06:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 61559 00:09:25.855 [2024-10-17 20:06:11.259359] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:26.821 20:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:26.821 00:09:26.821 real 0m5.510s 00:09:26.821 user 0m8.294s 00:09:26.821 sys 0m0.771s 00:09:26.821 20:06:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:26.821 ************************************ 00:09:26.821 END TEST raid_state_function_test 00:09:26.821 20:06:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.821 ************************************ 00:09:26.821 20:06:12 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:09:26.821 20:06:12 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 
']' 00:09:26.821 20:06:12 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:26.821 20:06:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:26.821 ************************************ 00:09:26.821 START TEST raid_state_function_test_sb 00:09:26.821 ************************************ 00:09:26.821 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 true 00:09:26.821 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:26.821 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:26.821 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:26.821 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:26.821 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:26.821 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:26.821 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:26.821 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:26.821 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:26.821 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:26.821 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:26.821 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:26.821 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:26.821 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:09:26.821 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:26.821 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:26.821 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:26.821 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:26.821 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:26.821 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:26.821 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:26.821 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:26.821 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:26.821 Process raid pid: 61818 00:09:26.821 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61818 00:09:26.821 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61818' 00:09:26.821 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:26.821 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61818 00:09:26.821 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 61818 ']' 00:09:26.821 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.821 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:26.821 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:09:26.821 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:26.821 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:26.821 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.092 [2024-10-17 20:06:12.530515] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 00:09:27.092 [2024-10-17 20:06:12.530698] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:27.092 [2024-10-17 20:06:12.708884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.350 [2024-10-17 20:06:12.848573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.607 [2024-10-17 20:06:13.056929] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:27.607 [2024-10-17 20:06:13.056983] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:28.173 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:28.173 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:09:28.173 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:28.173 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.173 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.173 [2024-10-17 20:06:13.540637] bdev.c:8281:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:09:28.173 [2024-10-17 20:06:13.540880] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:28.173 [2024-10-17 20:06:13.540914] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:28.173 [2024-10-17 20:06:13.540932] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:28.173 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.173 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:28.173 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.173 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:28.173 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:28.173 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:28.173 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:28.173 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.173 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.173 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.173 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.173 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.173 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.173 
20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.173 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.173 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.173 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.173 "name": "Existed_Raid", 00:09:28.173 "uuid": "8804094e-219e-4369-8be4-563dfe931363", 00:09:28.173 "strip_size_kb": 64, 00:09:28.173 "state": "configuring", 00:09:28.173 "raid_level": "concat", 00:09:28.173 "superblock": true, 00:09:28.173 "num_base_bdevs": 2, 00:09:28.173 "num_base_bdevs_discovered": 0, 00:09:28.173 "num_base_bdevs_operational": 2, 00:09:28.173 "base_bdevs_list": [ 00:09:28.173 { 00:09:28.173 "name": "BaseBdev1", 00:09:28.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.173 "is_configured": false, 00:09:28.173 "data_offset": 0, 00:09:28.173 "data_size": 0 00:09:28.173 }, 00:09:28.173 { 00:09:28.173 "name": "BaseBdev2", 00:09:28.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.173 "is_configured": false, 00:09:28.173 "data_offset": 0, 00:09:28.173 "data_size": 0 00:09:28.173 } 00:09:28.173 ] 00:09:28.173 }' 00:09:28.173 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.173 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.431 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:28.431 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.431 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.431 [2024-10-17 20:06:14.028739] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:09:28.431 [2024-10-17 20:06:14.028782] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:28.431 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.431 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:28.431 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.431 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.431 [2024-10-17 20:06:14.040752] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:28.431 [2024-10-17 20:06:14.040968] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:28.431 [2024-10-17 20:06:14.041138] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:28.431 [2024-10-17 20:06:14.041278] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:28.431 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.431 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:28.431 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.431 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.689 [2024-10-17 20:06:14.085278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:28.689 BaseBdev1 00:09:28.689 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.689 20:06:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:28.689 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:28.689 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:28.689 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:28.689 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:28.689 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:28.689 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:28.689 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.689 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.689 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.690 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:28.690 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.690 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.690 [ 00:09:28.690 { 00:09:28.690 "name": "BaseBdev1", 00:09:28.690 "aliases": [ 00:09:28.690 "1616d19a-bfd0-4af1-9b41-03c2fa9c4fcb" 00:09:28.690 ], 00:09:28.690 "product_name": "Malloc disk", 00:09:28.690 "block_size": 512, 00:09:28.690 "num_blocks": 65536, 00:09:28.690 "uuid": "1616d19a-bfd0-4af1-9b41-03c2fa9c4fcb", 00:09:28.690 "assigned_rate_limits": { 00:09:28.690 "rw_ios_per_sec": 0, 00:09:28.690 "rw_mbytes_per_sec": 0, 00:09:28.690 "r_mbytes_per_sec": 0, 00:09:28.690 "w_mbytes_per_sec": 0 00:09:28.690 }, 00:09:28.690 "claimed": true, 
00:09:28.690 "claim_type": "exclusive_write", 00:09:28.690 "zoned": false, 00:09:28.690 "supported_io_types": { 00:09:28.690 "read": true, 00:09:28.690 "write": true, 00:09:28.690 "unmap": true, 00:09:28.690 "flush": true, 00:09:28.690 "reset": true, 00:09:28.690 "nvme_admin": false, 00:09:28.690 "nvme_io": false, 00:09:28.690 "nvme_io_md": false, 00:09:28.690 "write_zeroes": true, 00:09:28.690 "zcopy": true, 00:09:28.690 "get_zone_info": false, 00:09:28.690 "zone_management": false, 00:09:28.690 "zone_append": false, 00:09:28.690 "compare": false, 00:09:28.690 "compare_and_write": false, 00:09:28.690 "abort": true, 00:09:28.690 "seek_hole": false, 00:09:28.690 "seek_data": false, 00:09:28.690 "copy": true, 00:09:28.690 "nvme_iov_md": false 00:09:28.690 }, 00:09:28.690 "memory_domains": [ 00:09:28.690 { 00:09:28.690 "dma_device_id": "system", 00:09:28.690 "dma_device_type": 1 00:09:28.690 }, 00:09:28.690 { 00:09:28.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.690 "dma_device_type": 2 00:09:28.690 } 00:09:28.690 ], 00:09:28.690 "driver_specific": {} 00:09:28.690 } 00:09:28.690 ] 00:09:28.690 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.690 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:28.690 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:28.690 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.690 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:28.690 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:28.690 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:28.690 20:06:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:28.690 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.690 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.690 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.690 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.690 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.690 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.690 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.690 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.690 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.690 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.690 "name": "Existed_Raid", 00:09:28.690 "uuid": "9182f93a-5243-49e0-9543-a8def6e4e42a", 00:09:28.690 "strip_size_kb": 64, 00:09:28.690 "state": "configuring", 00:09:28.690 "raid_level": "concat", 00:09:28.690 "superblock": true, 00:09:28.690 "num_base_bdevs": 2, 00:09:28.690 "num_base_bdevs_discovered": 1, 00:09:28.690 "num_base_bdevs_operational": 2, 00:09:28.690 "base_bdevs_list": [ 00:09:28.690 { 00:09:28.690 "name": "BaseBdev1", 00:09:28.690 "uuid": "1616d19a-bfd0-4af1-9b41-03c2fa9c4fcb", 00:09:28.690 "is_configured": true, 00:09:28.690 "data_offset": 2048, 00:09:28.690 "data_size": 63488 00:09:28.690 }, 00:09:28.690 { 00:09:28.690 "name": "BaseBdev2", 00:09:28.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.690 
"is_configured": false, 00:09:28.690 "data_offset": 0, 00:09:28.690 "data_size": 0 00:09:28.690 } 00:09:28.690 ] 00:09:28.690 }' 00:09:28.690 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.690 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.949 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:28.949 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.949 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.208 [2024-10-17 20:06:14.601475] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:29.208 [2024-10-17 20:06:14.601565] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:29.208 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.208 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:29.208 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.208 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.208 [2024-10-17 20:06:14.609528] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:29.208 [2024-10-17 20:06:14.612133] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:29.208 [2024-10-17 20:06:14.612198] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:29.208 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.208 20:06:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:29.208 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:29.208 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:29.208 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.208 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:29.208 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:29.208 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:29.208 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:29.208 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.208 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.208 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.208 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.208 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.208 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.208 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.208 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.208 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.208 20:06:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.208 "name": "Existed_Raid", 00:09:29.208 "uuid": "8166106b-1029-4472-a043-04e961f9b951", 00:09:29.208 "strip_size_kb": 64, 00:09:29.208 "state": "configuring", 00:09:29.208 "raid_level": "concat", 00:09:29.208 "superblock": true, 00:09:29.208 "num_base_bdevs": 2, 00:09:29.208 "num_base_bdevs_discovered": 1, 00:09:29.208 "num_base_bdevs_operational": 2, 00:09:29.208 "base_bdevs_list": [ 00:09:29.208 { 00:09:29.208 "name": "BaseBdev1", 00:09:29.208 "uuid": "1616d19a-bfd0-4af1-9b41-03c2fa9c4fcb", 00:09:29.208 "is_configured": true, 00:09:29.208 "data_offset": 2048, 00:09:29.208 "data_size": 63488 00:09:29.208 }, 00:09:29.208 { 00:09:29.208 "name": "BaseBdev2", 00:09:29.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.208 "is_configured": false, 00:09:29.208 "data_offset": 0, 00:09:29.208 "data_size": 0 00:09:29.208 } 00:09:29.208 ] 00:09:29.208 }' 00:09:29.208 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.208 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.776 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:29.776 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.776 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.776 [2024-10-17 20:06:15.166745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:29.776 [2024-10-17 20:06:15.167094] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:29.776 [2024-10-17 20:06:15.167114] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:29.776 [2024-10-17 20:06:15.167459] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:09:29.776 [2024-10-17 20:06:15.167652] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:29.776 [2024-10-17 20:06:15.167680] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:29.776 [2024-10-17 20:06:15.167843] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:29.776 BaseBdev2 00:09:29.776 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.776 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:29.776 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:29.776 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:29.776 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:29.776 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:29.776 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:29.776 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:29.776 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.776 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.776 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.777 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:29.777 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.777 20:06:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.777 [ 00:09:29.777 { 00:09:29.777 "name": "BaseBdev2", 00:09:29.777 "aliases": [ 00:09:29.777 "53818113-6533-4a4a-8ebe-432326ecd58a" 00:09:29.777 ], 00:09:29.777 "product_name": "Malloc disk", 00:09:29.777 "block_size": 512, 00:09:29.777 "num_blocks": 65536, 00:09:29.777 "uuid": "53818113-6533-4a4a-8ebe-432326ecd58a", 00:09:29.777 "assigned_rate_limits": { 00:09:29.777 "rw_ios_per_sec": 0, 00:09:29.777 "rw_mbytes_per_sec": 0, 00:09:29.777 "r_mbytes_per_sec": 0, 00:09:29.777 "w_mbytes_per_sec": 0 00:09:29.777 }, 00:09:29.777 "claimed": true, 00:09:29.777 "claim_type": "exclusive_write", 00:09:29.777 "zoned": false, 00:09:29.777 "supported_io_types": { 00:09:29.777 "read": true, 00:09:29.777 "write": true, 00:09:29.777 "unmap": true, 00:09:29.777 "flush": true, 00:09:29.777 "reset": true, 00:09:29.777 "nvme_admin": false, 00:09:29.777 "nvme_io": false, 00:09:29.777 "nvme_io_md": false, 00:09:29.777 "write_zeroes": true, 00:09:29.777 "zcopy": true, 00:09:29.777 "get_zone_info": false, 00:09:29.777 "zone_management": false, 00:09:29.777 "zone_append": false, 00:09:29.777 "compare": false, 00:09:29.777 "compare_and_write": false, 00:09:29.777 "abort": true, 00:09:29.777 "seek_hole": false, 00:09:29.777 "seek_data": false, 00:09:29.777 "copy": true, 00:09:29.777 "nvme_iov_md": false 00:09:29.777 }, 00:09:29.777 "memory_domains": [ 00:09:29.777 { 00:09:29.777 "dma_device_id": "system", 00:09:29.777 "dma_device_type": 1 00:09:29.777 }, 00:09:29.777 { 00:09:29.777 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.777 "dma_device_type": 2 00:09:29.777 } 00:09:29.777 ], 00:09:29.777 "driver_specific": {} 00:09:29.777 } 00:09:29.777 ] 00:09:29.777 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.777 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:29.777 20:06:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:29.777 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:29.777 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:09:29.777 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.777 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:29.777 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:29.777 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:29.777 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:29.777 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.777 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.777 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.777 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.777 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.777 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.777 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.777 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.777 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.777 20:06:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.777 "name": "Existed_Raid", 00:09:29.777 "uuid": "8166106b-1029-4472-a043-04e961f9b951", 00:09:29.777 "strip_size_kb": 64, 00:09:29.777 "state": "online", 00:09:29.777 "raid_level": "concat", 00:09:29.777 "superblock": true, 00:09:29.777 "num_base_bdevs": 2, 00:09:29.777 "num_base_bdevs_discovered": 2, 00:09:29.777 "num_base_bdevs_operational": 2, 00:09:29.777 "base_bdevs_list": [ 00:09:29.777 { 00:09:29.777 "name": "BaseBdev1", 00:09:29.777 "uuid": "1616d19a-bfd0-4af1-9b41-03c2fa9c4fcb", 00:09:29.777 "is_configured": true, 00:09:29.777 "data_offset": 2048, 00:09:29.777 "data_size": 63488 00:09:29.777 }, 00:09:29.777 { 00:09:29.777 "name": "BaseBdev2", 00:09:29.777 "uuid": "53818113-6533-4a4a-8ebe-432326ecd58a", 00:09:29.777 "is_configured": true, 00:09:29.777 "data_offset": 2048, 00:09:29.777 "data_size": 63488 00:09:29.777 } 00:09:29.777 ] 00:09:29.777 }' 00:09:29.777 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.777 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.343 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:30.344 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:30.344 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:30.344 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:30.344 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:30.344 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:30.344 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:09:30.344 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:30.344 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.344 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.344 [2024-10-17 20:06:15.703349] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:30.344 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.344 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:30.344 "name": "Existed_Raid", 00:09:30.344 "aliases": [ 00:09:30.344 "8166106b-1029-4472-a043-04e961f9b951" 00:09:30.344 ], 00:09:30.344 "product_name": "Raid Volume", 00:09:30.344 "block_size": 512, 00:09:30.344 "num_blocks": 126976, 00:09:30.344 "uuid": "8166106b-1029-4472-a043-04e961f9b951", 00:09:30.344 "assigned_rate_limits": { 00:09:30.344 "rw_ios_per_sec": 0, 00:09:30.344 "rw_mbytes_per_sec": 0, 00:09:30.344 "r_mbytes_per_sec": 0, 00:09:30.344 "w_mbytes_per_sec": 0 00:09:30.344 }, 00:09:30.344 "claimed": false, 00:09:30.344 "zoned": false, 00:09:30.344 "supported_io_types": { 00:09:30.344 "read": true, 00:09:30.344 "write": true, 00:09:30.344 "unmap": true, 00:09:30.344 "flush": true, 00:09:30.344 "reset": true, 00:09:30.344 "nvme_admin": false, 00:09:30.344 "nvme_io": false, 00:09:30.344 "nvme_io_md": false, 00:09:30.344 "write_zeroes": true, 00:09:30.344 "zcopy": false, 00:09:30.344 "get_zone_info": false, 00:09:30.344 "zone_management": false, 00:09:30.344 "zone_append": false, 00:09:30.344 "compare": false, 00:09:30.344 "compare_and_write": false, 00:09:30.344 "abort": false, 00:09:30.344 "seek_hole": false, 00:09:30.344 "seek_data": false, 00:09:30.344 "copy": false, 00:09:30.344 "nvme_iov_md": false 00:09:30.344 }, 00:09:30.344 "memory_domains": [ 00:09:30.344 { 00:09:30.344 
"dma_device_id": "system", 00:09:30.344 "dma_device_type": 1 00:09:30.344 }, 00:09:30.344 { 00:09:30.344 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.344 "dma_device_type": 2 00:09:30.344 }, 00:09:30.344 { 00:09:30.344 "dma_device_id": "system", 00:09:30.344 "dma_device_type": 1 00:09:30.344 }, 00:09:30.344 { 00:09:30.344 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.344 "dma_device_type": 2 00:09:30.344 } 00:09:30.344 ], 00:09:30.344 "driver_specific": { 00:09:30.344 "raid": { 00:09:30.344 "uuid": "8166106b-1029-4472-a043-04e961f9b951", 00:09:30.344 "strip_size_kb": 64, 00:09:30.344 "state": "online", 00:09:30.344 "raid_level": "concat", 00:09:30.344 "superblock": true, 00:09:30.344 "num_base_bdevs": 2, 00:09:30.344 "num_base_bdevs_discovered": 2, 00:09:30.344 "num_base_bdevs_operational": 2, 00:09:30.344 "base_bdevs_list": [ 00:09:30.344 { 00:09:30.344 "name": "BaseBdev1", 00:09:30.344 "uuid": "1616d19a-bfd0-4af1-9b41-03c2fa9c4fcb", 00:09:30.344 "is_configured": true, 00:09:30.344 "data_offset": 2048, 00:09:30.344 "data_size": 63488 00:09:30.344 }, 00:09:30.344 { 00:09:30.344 "name": "BaseBdev2", 00:09:30.344 "uuid": "53818113-6533-4a4a-8ebe-432326ecd58a", 00:09:30.344 "is_configured": true, 00:09:30.344 "data_offset": 2048, 00:09:30.344 "data_size": 63488 00:09:30.344 } 00:09:30.344 ] 00:09:30.344 } 00:09:30.344 } 00:09:30.344 }' 00:09:30.344 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:30.344 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:30.344 BaseBdev2' 00:09:30.344 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:30.344 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:30.344 20:06:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:30.344 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:30.344 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.344 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.344 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:30.344 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.344 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:30.344 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:30.344 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:30.344 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:30.344 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.344 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.344 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:30.344 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.344 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:30.344 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:30.344 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:09:30.344 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.344 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.344 [2024-10-17 20:06:15.951105] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:30.344 [2024-10-17 20:06:15.951151] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:30.344 [2024-10-17 20:06:15.951219] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:30.608 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.608 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:30.608 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:30.608 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:30.608 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:30.608 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:30.608 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:09:30.608 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.608 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:30.608 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:30.608 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:30.608 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:09:30.608 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.608 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.608 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.608 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.608 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.608 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.608 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.608 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.608 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.608 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.608 "name": "Existed_Raid", 00:09:30.608 "uuid": "8166106b-1029-4472-a043-04e961f9b951", 00:09:30.608 "strip_size_kb": 64, 00:09:30.608 "state": "offline", 00:09:30.608 "raid_level": "concat", 00:09:30.608 "superblock": true, 00:09:30.608 "num_base_bdevs": 2, 00:09:30.608 "num_base_bdevs_discovered": 1, 00:09:30.608 "num_base_bdevs_operational": 1, 00:09:30.608 "base_bdevs_list": [ 00:09:30.608 { 00:09:30.608 "name": null, 00:09:30.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.608 "is_configured": false, 00:09:30.608 "data_offset": 0, 00:09:30.608 "data_size": 63488 00:09:30.608 }, 00:09:30.608 { 00:09:30.608 "name": "BaseBdev2", 00:09:30.608 "uuid": "53818113-6533-4a4a-8ebe-432326ecd58a", 00:09:30.608 "is_configured": true, 00:09:30.608 "data_offset": 2048, 00:09:30.608 "data_size": 63488 00:09:30.608 } 00:09:30.608 ] 
00:09:30.608 }' 00:09:30.608 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.608 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.175 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:31.175 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:31.175 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.175 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.175 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.175 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:31.175 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.175 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:31.175 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:31.175 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:31.175 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.175 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.175 [2024-10-17 20:06:16.611684] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:31.175 [2024-10-17 20:06:16.611760] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:31.175 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.175 20:06:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:31.175 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:31.175 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.175 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.175 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:31.175 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.175 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.175 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:31.175 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:31.175 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:31.175 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61818 00:09:31.175 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 61818 ']' 00:09:31.175 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 61818 00:09:31.175 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:09:31.175 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:31.175 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61818 00:09:31.175 killing process with pid 61818 00:09:31.175 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:31.175 20:06:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:31.175 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61818' 00:09:31.175 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 61818 00:09:31.175 [2024-10-17 20:06:16.789742] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:31.175 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 61818 00:09:31.175 [2024-10-17 20:06:16.804509] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:32.550 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:32.550 00:09:32.550 real 0m5.466s 00:09:32.550 user 0m8.233s 00:09:32.551 sys 0m0.757s 00:09:32.551 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:32.551 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.551 ************************************ 00:09:32.551 END TEST raid_state_function_test_sb 00:09:32.551 ************************************ 00:09:32.551 20:06:17 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:09:32.551 20:06:17 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:32.551 20:06:17 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:32.551 20:06:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:32.551 ************************************ 00:09:32.551 START TEST raid_superblock_test 00:09:32.551 ************************************ 00:09:32.551 20:06:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 2 00:09:32.551 20:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:32.551 20:06:17 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:09:32.551 20:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:32.551 20:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:32.551 20:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:32.551 20:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:32.551 20:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:32.551 20:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:32.551 20:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:32.551 20:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:32.551 20:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:32.551 20:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:32.551 20:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:32.551 20:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:09:32.551 20:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:32.551 20:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:32.551 20:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62075 00:09:32.551 20:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62075 00:09:32.551 20:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:32.551 20:06:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 62075 ']' 00:09:32.551 
20:06:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.551 20:06:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:32.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:32.551 20:06:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.551 20:06:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:32.551 20:06:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.551 [2024-10-17 20:06:18.043049] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 00:09:32.551 [2024-10-17 20:06:18.043272] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62075 ] 00:09:32.810 [2024-10-17 20:06:18.219394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.810 [2024-10-17 20:06:18.350789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.069 [2024-10-17 20:06:18.543126] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:33.069 [2024-10-17 20:06:18.543242] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:33.636 20:06:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:33.636 20:06:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:09:33.636 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:33.636 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:09:33.636 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:33.636 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:33.636 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:33.636 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:33.636 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:33.636 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:33.636 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:33.636 20:06:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.636 20:06:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.636 malloc1 00:09:33.636 20:06:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.636 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:33.636 20:06:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.636 20:06:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.636 [2024-10-17 20:06:19.135687] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:33.636 [2024-10-17 20:06:19.135786] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:33.636 [2024-10-17 20:06:19.135821] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:33.636 [2024-10-17 20:06:19.135836] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:09:33.636 [2024-10-17 20:06:19.138681] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:33.636 [2024-10-17 20:06:19.138727] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:33.636 pt1 00:09:33.636 20:06:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.636 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:33.636 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:33.636 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:33.636 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:33.636 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:33.636 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:33.636 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:33.636 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:33.636 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:33.636 20:06:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.636 20:06:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.636 malloc2 00:09:33.636 20:06:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.636 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:33.636 20:06:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:33.636 20:06:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.636 [2024-10-17 20:06:19.193949] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:33.636 [2024-10-17 20:06:19.194046] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:33.636 [2024-10-17 20:06:19.194079] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:33.636 [2024-10-17 20:06:19.194094] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:33.636 [2024-10-17 20:06:19.196892] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:33.636 [2024-10-17 20:06:19.196953] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:33.636 pt2 00:09:33.636 20:06:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.636 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:33.636 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:33.636 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:09:33.636 20:06:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.636 20:06:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.636 [2024-10-17 20:06:19.206040] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:33.636 [2024-10-17 20:06:19.208607] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:33.636 [2024-10-17 20:06:19.208798] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:33.636 [2024-10-17 20:06:19.208816] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:09:33.636 [2024-10-17 20:06:19.209167] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:33.636 [2024-10-17 20:06:19.209375] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:33.636 [2024-10-17 20:06:19.209403] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:33.636 [2024-10-17 20:06:19.209592] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:33.636 20:06:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.636 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:33.636 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:33.636 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:33.636 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:33.636 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:33.636 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:33.636 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.636 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.636 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.636 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.636 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.637 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:33.637 20:06:19 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.637 20:06:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.637 20:06:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.637 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.637 "name": "raid_bdev1", 00:09:33.637 "uuid": "f592abd7-b154-425c-9e56-9066bb512bb8", 00:09:33.637 "strip_size_kb": 64, 00:09:33.637 "state": "online", 00:09:33.637 "raid_level": "concat", 00:09:33.637 "superblock": true, 00:09:33.637 "num_base_bdevs": 2, 00:09:33.637 "num_base_bdevs_discovered": 2, 00:09:33.637 "num_base_bdevs_operational": 2, 00:09:33.637 "base_bdevs_list": [ 00:09:33.637 { 00:09:33.637 "name": "pt1", 00:09:33.637 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:33.637 "is_configured": true, 00:09:33.637 "data_offset": 2048, 00:09:33.637 "data_size": 63488 00:09:33.637 }, 00:09:33.637 { 00:09:33.637 "name": "pt2", 00:09:33.637 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:33.637 "is_configured": true, 00:09:33.637 "data_offset": 2048, 00:09:33.637 "data_size": 63488 00:09:33.637 } 00:09:33.637 ] 00:09:33.637 }' 00:09:33.637 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.637 20:06:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.204 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:34.204 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:34.204 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:34.204 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:34.204 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:34.204 
20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:34.204 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:34.204 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:34.204 20:06:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.204 20:06:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.204 [2024-10-17 20:06:19.726536] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:34.204 20:06:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.204 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:34.204 "name": "raid_bdev1", 00:09:34.204 "aliases": [ 00:09:34.204 "f592abd7-b154-425c-9e56-9066bb512bb8" 00:09:34.204 ], 00:09:34.204 "product_name": "Raid Volume", 00:09:34.204 "block_size": 512, 00:09:34.204 "num_blocks": 126976, 00:09:34.204 "uuid": "f592abd7-b154-425c-9e56-9066bb512bb8", 00:09:34.204 "assigned_rate_limits": { 00:09:34.204 "rw_ios_per_sec": 0, 00:09:34.204 "rw_mbytes_per_sec": 0, 00:09:34.204 "r_mbytes_per_sec": 0, 00:09:34.204 "w_mbytes_per_sec": 0 00:09:34.204 }, 00:09:34.204 "claimed": false, 00:09:34.204 "zoned": false, 00:09:34.204 "supported_io_types": { 00:09:34.204 "read": true, 00:09:34.204 "write": true, 00:09:34.204 "unmap": true, 00:09:34.204 "flush": true, 00:09:34.204 "reset": true, 00:09:34.204 "nvme_admin": false, 00:09:34.204 "nvme_io": false, 00:09:34.204 "nvme_io_md": false, 00:09:34.204 "write_zeroes": true, 00:09:34.204 "zcopy": false, 00:09:34.204 "get_zone_info": false, 00:09:34.204 "zone_management": false, 00:09:34.204 "zone_append": false, 00:09:34.204 "compare": false, 00:09:34.204 "compare_and_write": false, 00:09:34.204 "abort": false, 00:09:34.204 "seek_hole": false, 00:09:34.204 
"seek_data": false, 00:09:34.204 "copy": false, 00:09:34.204 "nvme_iov_md": false 00:09:34.204 }, 00:09:34.204 "memory_domains": [ 00:09:34.204 { 00:09:34.204 "dma_device_id": "system", 00:09:34.204 "dma_device_type": 1 00:09:34.204 }, 00:09:34.204 { 00:09:34.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.204 "dma_device_type": 2 00:09:34.204 }, 00:09:34.204 { 00:09:34.204 "dma_device_id": "system", 00:09:34.204 "dma_device_type": 1 00:09:34.204 }, 00:09:34.204 { 00:09:34.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.204 "dma_device_type": 2 00:09:34.204 } 00:09:34.204 ], 00:09:34.204 "driver_specific": { 00:09:34.204 "raid": { 00:09:34.204 "uuid": "f592abd7-b154-425c-9e56-9066bb512bb8", 00:09:34.204 "strip_size_kb": 64, 00:09:34.204 "state": "online", 00:09:34.204 "raid_level": "concat", 00:09:34.204 "superblock": true, 00:09:34.204 "num_base_bdevs": 2, 00:09:34.204 "num_base_bdevs_discovered": 2, 00:09:34.204 "num_base_bdevs_operational": 2, 00:09:34.204 "base_bdevs_list": [ 00:09:34.204 { 00:09:34.204 "name": "pt1", 00:09:34.204 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:34.204 "is_configured": true, 00:09:34.204 "data_offset": 2048, 00:09:34.204 "data_size": 63488 00:09:34.204 }, 00:09:34.204 { 00:09:34.204 "name": "pt2", 00:09:34.204 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:34.204 "is_configured": true, 00:09:34.204 "data_offset": 2048, 00:09:34.204 "data_size": 63488 00:09:34.204 } 00:09:34.204 ] 00:09:34.204 } 00:09:34.204 } 00:09:34.204 }' 00:09:34.204 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:34.204 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:34.204 pt2' 00:09:34.204 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:34.475 20:06:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:34.475 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:34.475 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:34.475 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:34.475 20:06:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.475 20:06:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.475 20:06:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.475 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:34.475 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:34.475 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:34.475 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:34.475 20:06:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.475 20:06:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.475 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:34.475 20:06:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.475 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:34.475 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:34.475 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:09:34.475 20:06:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.475 20:06:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.475 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:34.475 [2024-10-17 20:06:19.990558] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:34.475 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.475 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f592abd7-b154-425c-9e56-9066bb512bb8 00:09:34.475 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z f592abd7-b154-425c-9e56-9066bb512bb8 ']' 00:09:34.475 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:34.475 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.475 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.475 [2024-10-17 20:06:20.042280] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:34.475 [2024-10-17 20:06:20.042315] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:34.475 [2024-10-17 20:06:20.042438] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:34.475 [2024-10-17 20:06:20.042534] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:34.475 [2024-10-17 20:06:20.042555] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:34.475 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.475 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:34.475 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.475 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:34.475 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.475 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.475 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:34.475 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:34.475 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:34.475 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:34.475 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.476 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.476 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.476 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:34.476 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:34.476 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.476 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.476 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.476 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:34.476 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:34.476 20:06:20 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.476 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.756 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.756 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:34.756 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:34.756 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:34.756 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:34.756 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:34.756 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:34.756 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:34.756 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:34.756 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:34.756 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.756 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.756 [2024-10-17 20:06:20.174296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:34.756 [2024-10-17 20:06:20.176945] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:34.756 [2024-10-17 20:06:20.177186] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: 
*ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:34.756 [2024-10-17 20:06:20.177420] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:34.756 [2024-10-17 20:06:20.177649] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:34.756 [2024-10-17 20:06:20.177706] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:34.756 request: 00:09:34.756 { 00:09:34.756 "name": "raid_bdev1", 00:09:34.756 "raid_level": "concat", 00:09:34.756 "base_bdevs": [ 00:09:34.756 "malloc1", 00:09:34.756 "malloc2" 00:09:34.756 ], 00:09:34.756 "strip_size_kb": 64, 00:09:34.756 "superblock": false, 00:09:34.756 "method": "bdev_raid_create", 00:09:34.756 "req_id": 1 00:09:34.756 } 00:09:34.756 Got JSON-RPC error response 00:09:34.756 response: 00:09:34.756 { 00:09:34.756 "code": -17, 00:09:34.756 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:34.756 } 00:09:34.756 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:34.756 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:34.756 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:34.756 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:34.756 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:34.756 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.756 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:34.756 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.756 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.756 
20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.756 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:34.756 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:34.756 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:34.756 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.756 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.756 [2024-10-17 20:06:20.238332] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:34.756 [2024-10-17 20:06:20.238596] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:34.756 [2024-10-17 20:06:20.238666] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:34.756 [2024-10-17 20:06:20.238818] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:34.756 [2024-10-17 20:06:20.241747] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:34.756 [2024-10-17 20:06:20.241810] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:34.756 [2024-10-17 20:06:20.241892] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:34.757 [2024-10-17 20:06:20.241962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:34.757 pt1 00:09:34.757 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.757 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:09:34.757 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:09:34.757 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.757 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:34.757 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:34.757 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:34.757 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.757 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.757 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.757 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.757 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:34.757 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.757 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.757 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.757 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.757 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.757 "name": "raid_bdev1", 00:09:34.757 "uuid": "f592abd7-b154-425c-9e56-9066bb512bb8", 00:09:34.757 "strip_size_kb": 64, 00:09:34.757 "state": "configuring", 00:09:34.757 "raid_level": "concat", 00:09:34.757 "superblock": true, 00:09:34.757 "num_base_bdevs": 2, 00:09:34.757 "num_base_bdevs_discovered": 1, 00:09:34.757 "num_base_bdevs_operational": 2, 00:09:34.757 "base_bdevs_list": [ 00:09:34.757 { 00:09:34.757 "name": "pt1", 00:09:34.757 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:09:34.757 "is_configured": true, 00:09:34.757 "data_offset": 2048, 00:09:34.757 "data_size": 63488 00:09:34.757 }, 00:09:34.757 { 00:09:34.757 "name": null, 00:09:34.757 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:34.757 "is_configured": false, 00:09:34.757 "data_offset": 2048, 00:09:34.757 "data_size": 63488 00:09:34.757 } 00:09:34.757 ] 00:09:34.757 }' 00:09:34.757 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.757 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.326 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:09:35.327 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:35.327 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:35.327 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:35.327 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.327 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.327 [2024-10-17 20:06:20.742551] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:35.327 [2024-10-17 20:06:20.742642] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:35.327 [2024-10-17 20:06:20.742672] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:09:35.327 [2024-10-17 20:06:20.742688] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:35.327 [2024-10-17 20:06:20.743321] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:35.327 [2024-10-17 20:06:20.743360] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:09:35.327 [2024-10-17 20:06:20.743481] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:35.327 [2024-10-17 20:06:20.743516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:35.327 [2024-10-17 20:06:20.743649] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:35.327 [2024-10-17 20:06:20.743669] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:35.327 [2024-10-17 20:06:20.743976] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:35.327 [2024-10-17 20:06:20.744224] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:35.327 [2024-10-17 20:06:20.744242] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:35.327 [2024-10-17 20:06:20.744409] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:35.327 pt2 00:09:35.327 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.327 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:35.327 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:35.327 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:35.327 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:35.327 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:35.327 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:35.327 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:35.327 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=2 00:09:35.327 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.327 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.327 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.327 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.327 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.327 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.327 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.327 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:35.327 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.327 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.327 "name": "raid_bdev1", 00:09:35.327 "uuid": "f592abd7-b154-425c-9e56-9066bb512bb8", 00:09:35.327 "strip_size_kb": 64, 00:09:35.327 "state": "online", 00:09:35.327 "raid_level": "concat", 00:09:35.327 "superblock": true, 00:09:35.327 "num_base_bdevs": 2, 00:09:35.327 "num_base_bdevs_discovered": 2, 00:09:35.327 "num_base_bdevs_operational": 2, 00:09:35.327 "base_bdevs_list": [ 00:09:35.327 { 00:09:35.327 "name": "pt1", 00:09:35.327 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:35.327 "is_configured": true, 00:09:35.327 "data_offset": 2048, 00:09:35.327 "data_size": 63488 00:09:35.327 }, 00:09:35.327 { 00:09:35.327 "name": "pt2", 00:09:35.327 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:35.327 "is_configured": true, 00:09:35.327 "data_offset": 2048, 00:09:35.327 "data_size": 63488 00:09:35.327 } 00:09:35.327 ] 00:09:35.327 }' 00:09:35.327 20:06:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.327 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.895 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:35.895 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:35.895 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:35.895 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:35.895 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:35.895 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:35.895 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:35.895 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:35.895 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.895 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.895 [2024-10-17 20:06:21.267070] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:35.895 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.895 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:35.895 "name": "raid_bdev1", 00:09:35.895 "aliases": [ 00:09:35.895 "f592abd7-b154-425c-9e56-9066bb512bb8" 00:09:35.895 ], 00:09:35.895 "product_name": "Raid Volume", 00:09:35.895 "block_size": 512, 00:09:35.895 "num_blocks": 126976, 00:09:35.895 "uuid": "f592abd7-b154-425c-9e56-9066bb512bb8", 00:09:35.895 "assigned_rate_limits": { 00:09:35.895 "rw_ios_per_sec": 0, 00:09:35.895 "rw_mbytes_per_sec": 0, 00:09:35.895 
"r_mbytes_per_sec": 0, 00:09:35.895 "w_mbytes_per_sec": 0 00:09:35.895 }, 00:09:35.895 "claimed": false, 00:09:35.895 "zoned": false, 00:09:35.895 "supported_io_types": { 00:09:35.895 "read": true, 00:09:35.895 "write": true, 00:09:35.895 "unmap": true, 00:09:35.895 "flush": true, 00:09:35.895 "reset": true, 00:09:35.895 "nvme_admin": false, 00:09:35.895 "nvme_io": false, 00:09:35.895 "nvme_io_md": false, 00:09:35.895 "write_zeroes": true, 00:09:35.895 "zcopy": false, 00:09:35.895 "get_zone_info": false, 00:09:35.895 "zone_management": false, 00:09:35.895 "zone_append": false, 00:09:35.895 "compare": false, 00:09:35.895 "compare_and_write": false, 00:09:35.895 "abort": false, 00:09:35.895 "seek_hole": false, 00:09:35.895 "seek_data": false, 00:09:35.895 "copy": false, 00:09:35.895 "nvme_iov_md": false 00:09:35.895 }, 00:09:35.895 "memory_domains": [ 00:09:35.895 { 00:09:35.895 "dma_device_id": "system", 00:09:35.895 "dma_device_type": 1 00:09:35.895 }, 00:09:35.895 { 00:09:35.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.895 "dma_device_type": 2 00:09:35.895 }, 00:09:35.895 { 00:09:35.895 "dma_device_id": "system", 00:09:35.895 "dma_device_type": 1 00:09:35.895 }, 00:09:35.895 { 00:09:35.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.895 "dma_device_type": 2 00:09:35.895 } 00:09:35.895 ], 00:09:35.895 "driver_specific": { 00:09:35.895 "raid": { 00:09:35.895 "uuid": "f592abd7-b154-425c-9e56-9066bb512bb8", 00:09:35.895 "strip_size_kb": 64, 00:09:35.895 "state": "online", 00:09:35.895 "raid_level": "concat", 00:09:35.895 "superblock": true, 00:09:35.895 "num_base_bdevs": 2, 00:09:35.895 "num_base_bdevs_discovered": 2, 00:09:35.895 "num_base_bdevs_operational": 2, 00:09:35.895 "base_bdevs_list": [ 00:09:35.895 { 00:09:35.895 "name": "pt1", 00:09:35.895 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:35.895 "is_configured": true, 00:09:35.895 "data_offset": 2048, 00:09:35.895 "data_size": 63488 00:09:35.895 }, 00:09:35.895 { 00:09:35.895 "name": 
"pt2", 00:09:35.895 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:35.896 "is_configured": true, 00:09:35.896 "data_offset": 2048, 00:09:35.896 "data_size": 63488 00:09:35.896 } 00:09:35.896 ] 00:09:35.896 } 00:09:35.896 } 00:09:35.896 }' 00:09:35.896 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:35.896 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:35.896 pt2' 00:09:35.896 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.896 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:35.896 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.896 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:35.896 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.896 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.896 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.896 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.896 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.896 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.896 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.896 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.896 20:06:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:35.896 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.896 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.896 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.896 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.896 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.896 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:35.896 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.896 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.896 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:35.896 [2024-10-17 20:06:21.531105] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:36.155 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.155 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' f592abd7-b154-425c-9e56-9066bb512bb8 '!=' f592abd7-b154-425c-9e56-9066bb512bb8 ']' 00:09:36.155 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:36.155 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:36.155 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:36.155 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62075 00:09:36.155 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 62075 ']' 00:09:36.155 20:06:21 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@954 -- # kill -0 62075 00:09:36.155 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:09:36.155 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:36.155 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62075 00:09:36.155 killing process with pid 62075 00:09:36.155 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:36.155 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:36.155 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62075' 00:09:36.155 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 62075 00:09:36.155 [2024-10-17 20:06:21.618501] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:36.155 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 62075 00:09:36.155 [2024-10-17 20:06:21.618603] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:36.155 [2024-10-17 20:06:21.618666] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:36.155 [2024-10-17 20:06:21.618684] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:36.155 [2024-10-17 20:06:21.789758] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:37.532 20:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:37.532 00:09:37.532 real 0m4.852s 00:09:37.532 user 0m7.178s 00:09:37.532 sys 0m0.719s 00:09:37.532 ************************************ 00:09:37.532 END TEST raid_superblock_test 00:09:37.532 ************************************ 00:09:37.532 20:06:22 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:37.532 20:06:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.532 20:06:22 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:09:37.532 20:06:22 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:37.532 20:06:22 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:37.532 20:06:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:37.532 ************************************ 00:09:37.532 START TEST raid_read_error_test 00:09:37.532 ************************************ 00:09:37.532 20:06:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 read 00:09:37.532 20:06:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:37.532 20:06:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:37.532 20:06:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:37.532 20:06:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:37.532 20:06:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:37.532 20:06:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:37.532 20:06:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:37.532 20:06:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:37.532 20:06:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:37.532 20:06:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:37.532 20:06:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:37.532 20:06:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 
-- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:37.532 20:06:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:37.532 20:06:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:37.532 20:06:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:37.532 20:06:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:37.532 20:06:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:37.532 20:06:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:37.532 20:06:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:37.532 20:06:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:37.532 20:06:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:37.532 20:06:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:37.532 20:06:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ndgNYFMGhX 00:09:37.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:37.532 20:06:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62287 00:09:37.532 20:06:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62287 00:09:37.532 20:06:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:37.532 20:06:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 62287 ']' 00:09:37.532 20:06:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:37.532 20:06:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:37.532 20:06:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:37.532 20:06:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:37.532 20:06:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.532 [2024-10-17 20:06:22.952128] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 
00:09:37.532 [2024-10-17 20:06:22.952608] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62287 ] 00:09:37.532 [2024-10-17 20:06:23.130420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.791 [2024-10-17 20:06:23.287865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.049 [2024-10-17 20:06:23.501245] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:38.049 [2024-10-17 20:06:23.501300] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:38.308 20:06:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:38.308 20:06:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:38.308 20:06:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:38.308 20:06:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:38.308 20:06:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.308 20:06:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.567 BaseBdev1_malloc 00:09:38.567 20:06:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.567 20:06:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:38.567 20:06:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.567 20:06:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.567 true 00:09:38.567 20:06:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:38.567 20:06:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:38.567 20:06:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.567 20:06:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.567 [2024-10-17 20:06:24.006344] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:38.567 [2024-10-17 20:06:24.006442] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:38.567 [2024-10-17 20:06:24.006470] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:38.567 [2024-10-17 20:06:24.006488] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:38.567 [2024-10-17 20:06:24.009244] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:38.567 [2024-10-17 20:06:24.009292] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:38.567 BaseBdev1 00:09:38.567 20:06:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.567 20:06:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:38.567 20:06:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:38.567 20:06:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.567 20:06:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.567 BaseBdev2_malloc 00:09:38.567 20:06:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.567 20:06:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:38.567 20:06:24 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.567 20:06:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.567 true 00:09:38.567 20:06:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.567 20:06:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:38.567 20:06:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.567 20:06:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.567 [2024-10-17 20:06:24.059782] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:38.567 [2024-10-17 20:06:24.059863] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:38.567 [2024-10-17 20:06:24.059887] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:38.567 [2024-10-17 20:06:24.059902] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:38.567 [2024-10-17 20:06:24.062663] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:38.567 [2024-10-17 20:06:24.062724] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:38.567 BaseBdev2 00:09:38.567 20:06:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.567 20:06:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:38.567 20:06:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.567 20:06:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.567 [2024-10-17 20:06:24.067859] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:09:38.567 [2024-10-17 20:06:24.070933] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:38.567 [2024-10-17 20:06:24.071223] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:38.567 [2024-10-17 20:06:24.071249] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:38.567 [2024-10-17 20:06:24.071592] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:38.567 [2024-10-17 20:06:24.071972] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:38.567 [2024-10-17 20:06:24.071999] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:38.567 [2024-10-17 20:06:24.072318] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:38.567 20:06:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.567 20:06:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:38.567 20:06:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:38.567 20:06:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:38.567 20:06:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:38.567 20:06:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:38.567 20:06:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:38.567 20:06:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.567 20:06:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.567 20:06:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:38.567 20:06:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.568 20:06:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:38.568 20:06:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.568 20:06:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.568 20:06:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.568 20:06:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.568 20:06:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.568 "name": "raid_bdev1", 00:09:38.568 "uuid": "12ab40ff-c3fd-44f0-8c1d-07e9fac30d34", 00:09:38.568 "strip_size_kb": 64, 00:09:38.568 "state": "online", 00:09:38.568 "raid_level": "concat", 00:09:38.568 "superblock": true, 00:09:38.568 "num_base_bdevs": 2, 00:09:38.568 "num_base_bdevs_discovered": 2, 00:09:38.568 "num_base_bdevs_operational": 2, 00:09:38.568 "base_bdevs_list": [ 00:09:38.568 { 00:09:38.568 "name": "BaseBdev1", 00:09:38.568 "uuid": "2eddba00-a43e-559e-9f8c-fd4e7b4c4f5f", 00:09:38.568 "is_configured": true, 00:09:38.568 "data_offset": 2048, 00:09:38.568 "data_size": 63488 00:09:38.568 }, 00:09:38.568 { 00:09:38.568 "name": "BaseBdev2", 00:09:38.568 "uuid": "b4ee2c9e-05c5-53b3-b199-01d4fc5735a9", 00:09:38.568 "is_configured": true, 00:09:38.568 "data_offset": 2048, 00:09:38.568 "data_size": 63488 00:09:38.568 } 00:09:38.568 ] 00:09:38.568 }' 00:09:38.568 20:06:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.568 20:06:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.194 20:06:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:39.194 20:06:24 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:39.194 [2024-10-17 20:06:24.653775] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:40.131 20:06:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:40.131 20:06:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.131 20:06:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.131 20:06:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.131 20:06:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:40.131 20:06:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:40.131 20:06:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:40.131 20:06:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:40.131 20:06:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:40.131 20:06:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:40.131 20:06:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:40.131 20:06:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:40.131 20:06:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:40.131 20:06:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.131 20:06:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.131 20:06:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:09:40.131 20:06:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.131 20:06:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.131 20:06:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:40.131 20:06:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.131 20:06:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.131 20:06:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.131 20:06:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.131 "name": "raid_bdev1", 00:09:40.131 "uuid": "12ab40ff-c3fd-44f0-8c1d-07e9fac30d34", 00:09:40.131 "strip_size_kb": 64, 00:09:40.131 "state": "online", 00:09:40.131 "raid_level": "concat", 00:09:40.131 "superblock": true, 00:09:40.131 "num_base_bdevs": 2, 00:09:40.131 "num_base_bdevs_discovered": 2, 00:09:40.131 "num_base_bdevs_operational": 2, 00:09:40.131 "base_bdevs_list": [ 00:09:40.131 { 00:09:40.131 "name": "BaseBdev1", 00:09:40.131 "uuid": "2eddba00-a43e-559e-9f8c-fd4e7b4c4f5f", 00:09:40.131 "is_configured": true, 00:09:40.131 "data_offset": 2048, 00:09:40.131 "data_size": 63488 00:09:40.131 }, 00:09:40.131 { 00:09:40.131 "name": "BaseBdev2", 00:09:40.131 "uuid": "b4ee2c9e-05c5-53b3-b199-01d4fc5735a9", 00:09:40.131 "is_configured": true, 00:09:40.131 "data_offset": 2048, 00:09:40.131 "data_size": 63488 00:09:40.131 } 00:09:40.131 ] 00:09:40.131 }' 00:09:40.131 20:06:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.131 20:06:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.699 20:06:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:40.699 20:06:26 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.699 20:06:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.699 [2024-10-17 20:06:26.072317] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:40.699 [2024-10-17 20:06:26.072360] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:40.699 [2024-10-17 20:06:26.075774] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:40.699 [2024-10-17 20:06:26.075827] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:40.699 [2024-10-17 20:06:26.075868] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:40.699 [2024-10-17 20:06:26.075889] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:40.699 { 00:09:40.699 "results": [ 00:09:40.699 { 00:09:40.699 "job": "raid_bdev1", 00:09:40.699 "core_mask": "0x1", 00:09:40.699 "workload": "randrw", 00:09:40.699 "percentage": 50, 00:09:40.699 "status": "finished", 00:09:40.699 "queue_depth": 1, 00:09:40.699 "io_size": 131072, 00:09:40.699 "runtime": 1.416, 00:09:40.699 "iops": 11801.553672316384, 00:09:40.699 "mibps": 1475.194209039548, 00:09:40.699 "io_failed": 1, 00:09:40.699 "io_timeout": 0, 00:09:40.699 "avg_latency_us": 118.3468244919274, 00:09:40.699 "min_latency_us": 36.77090909090909, 00:09:40.699 "max_latency_us": 1832.0290909090909 00:09:40.699 } 00:09:40.699 ], 00:09:40.699 "core_count": 1 00:09:40.699 } 00:09:40.699 20:06:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.699 20:06:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62287 00:09:40.699 20:06:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 62287 ']' 00:09:40.699 20:06:26 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 62287 00:09:40.699 20:06:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:09:40.699 20:06:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:40.699 20:06:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62287 00:09:40.699 killing process with pid 62287 00:09:40.699 20:06:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:40.699 20:06:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:40.699 20:06:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62287' 00:09:40.699 20:06:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 62287 00:09:40.699 [2024-10-17 20:06:26.113474] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:40.699 20:06:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 62287 00:09:40.699 [2024-10-17 20:06:26.220477] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:41.634 20:06:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ndgNYFMGhX 00:09:41.634 20:06:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:41.634 20:06:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:41.634 ************************************ 00:09:41.634 END TEST raid_read_error_test 00:09:41.634 ************************************ 00:09:41.634 20:06:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:09:41.634 20:06:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:41.634 20:06:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 
00:09:41.634 20:06:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:41.634 20:06:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:09:41.634 00:09:41.634 real 0m4.396s 00:09:41.634 user 0m5.548s 00:09:41.634 sys 0m0.528s 00:09:41.634 20:06:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:41.634 20:06:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.634 20:06:27 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:09:41.634 20:06:27 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:41.634 20:06:27 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:41.634 20:06:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:41.634 ************************************ 00:09:41.634 START TEST raid_write_error_test 00:09:41.634 ************************************ 00:09:41.634 20:06:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 write 00:09:41.634 20:06:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:41.634 20:06:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:41.634 20:06:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:41.891 20:06:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:41.891 20:06:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:41.891 20:06:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:41.891 20:06:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:41.891 20:06:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:41.891 20:06:27 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:41.891 20:06:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:41.891 20:06:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:41.891 20:06:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:41.891 20:06:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:41.891 20:06:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:41.891 20:06:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:41.891 20:06:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:41.891 20:06:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:41.891 20:06:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:41.891 20:06:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:41.891 20:06:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:41.892 20:06:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:41.892 20:06:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:41.892 20:06:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.MbHeDli4DG 00:09:41.892 20:06:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62427 00:09:41.892 20:06:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62427 00:09:41.892 20:06:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:41.892 20:06:27 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 62427 ']' 00:09:41.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:41.892 20:06:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.892 20:06:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:41.892 20:06:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:41.892 20:06:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:41.892 20:06:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.892 [2024-10-17 20:06:27.407316] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 00:09:41.892 [2024-10-17 20:06:27.407484] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62427 ] 00:09:42.149 [2024-10-17 20:06:27.581807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.149 [2024-10-17 20:06:27.706265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.408 [2024-10-17 20:06:27.897943] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:42.408 [2024-10-17 20:06:27.898018] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:42.974 20:06:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:42.974 20:06:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:42.974 20:06:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:09:42.974 20:06:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:42.974 20:06:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.974 20:06:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.974 BaseBdev1_malloc 00:09:42.974 20:06:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.974 20:06:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:42.974 20:06:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.974 20:06:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.974 true 00:09:42.974 20:06:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.974 20:06:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:42.974 20:06:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.974 20:06:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.974 [2024-10-17 20:06:28.430618] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:42.974 [2024-10-17 20:06:28.430699] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:42.974 [2024-10-17 20:06:28.430728] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:42.974 [2024-10-17 20:06:28.430745] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:42.974 [2024-10-17 20:06:28.433634] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:42.974 [2024-10-17 20:06:28.433699] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:42.974 BaseBdev1 00:09:42.974 20:06:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.974 20:06:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:42.974 20:06:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:42.974 20:06:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.974 20:06:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.974 BaseBdev2_malloc 00:09:42.974 20:06:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.974 20:06:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:42.974 20:06:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.974 20:06:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.974 true 00:09:42.974 20:06:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.974 20:06:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:42.974 20:06:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.974 20:06:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.974 [2024-10-17 20:06:28.487061] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:42.974 [2024-10-17 20:06:28.487161] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:42.974 [2024-10-17 20:06:28.487189] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:42.974 
[2024-10-17 20:06:28.487208] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:42.974 [2024-10-17 20:06:28.490123] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:42.974 [2024-10-17 20:06:28.490174] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:42.974 BaseBdev2 00:09:42.974 20:06:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.974 20:06:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:42.974 20:06:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.974 20:06:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.974 [2024-10-17 20:06:28.495164] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:42.974 [2024-10-17 20:06:28.497689] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:42.974 [2024-10-17 20:06:28.497917] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:42.974 [2024-10-17 20:06:28.497941] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:42.974 [2024-10-17 20:06:28.498289] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:42.974 [2024-10-17 20:06:28.498513] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:42.974 [2024-10-17 20:06:28.498540] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:42.974 [2024-10-17 20:06:28.498743] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:42.974 20:06:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.974 
20:06:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:42.974 20:06:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:42.974 20:06:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:42.974 20:06:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:42.974 20:06:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:42.974 20:06:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:42.974 20:06:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.974 20:06:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.974 20:06:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.974 20:06:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.975 20:06:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:42.975 20:06:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.975 20:06:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.975 20:06:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.975 20:06:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.975 20:06:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.975 "name": "raid_bdev1", 00:09:42.975 "uuid": "7f1eec5e-3ab0-4fee-8fbc-2a8726e8bafd", 00:09:42.975 "strip_size_kb": 64, 00:09:42.975 "state": "online", 00:09:42.975 "raid_level": "concat", 00:09:42.975 "superblock": true, 
00:09:42.975 "num_base_bdevs": 2, 00:09:42.975 "num_base_bdevs_discovered": 2, 00:09:42.975 "num_base_bdevs_operational": 2, 00:09:42.975 "base_bdevs_list": [ 00:09:42.975 { 00:09:42.975 "name": "BaseBdev1", 00:09:42.975 "uuid": "030ac860-8f6e-5df5-8c70-db5f3523a85a", 00:09:42.975 "is_configured": true, 00:09:42.975 "data_offset": 2048, 00:09:42.975 "data_size": 63488 00:09:42.975 }, 00:09:42.975 { 00:09:42.975 "name": "BaseBdev2", 00:09:42.975 "uuid": "cd75f5a8-3888-577a-8897-e6d3eac2f1af", 00:09:42.975 "is_configured": true, 00:09:42.975 "data_offset": 2048, 00:09:42.975 "data_size": 63488 00:09:42.975 } 00:09:42.975 ] 00:09:42.975 }' 00:09:42.975 20:06:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.975 20:06:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.541 20:06:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:43.541 20:06:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:43.541 [2024-10-17 20:06:29.132709] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:44.548 20:06:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:44.548 20:06:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.548 20:06:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.548 20:06:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.548 20:06:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:44.548 20:06:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:44.548 20:06:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # 
expected_num_base_bdevs=2 00:09:44.548 20:06:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:44.548 20:06:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:44.548 20:06:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:44.548 20:06:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:44.548 20:06:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:44.548 20:06:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:44.548 20:06:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.548 20:06:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.548 20:06:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.548 20:06:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.548 20:06:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.548 20:06:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:44.548 20:06:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.548 20:06:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.548 20:06:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.548 20:06:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.548 "name": "raid_bdev1", 00:09:44.548 "uuid": "7f1eec5e-3ab0-4fee-8fbc-2a8726e8bafd", 00:09:44.548 "strip_size_kb": 64, 00:09:44.548 "state": "online", 00:09:44.548 "raid_level": "concat", 
00:09:44.548 "superblock": true, 00:09:44.548 "num_base_bdevs": 2, 00:09:44.548 "num_base_bdevs_discovered": 2, 00:09:44.548 "num_base_bdevs_operational": 2, 00:09:44.548 "base_bdevs_list": [ 00:09:44.548 { 00:09:44.548 "name": "BaseBdev1", 00:09:44.548 "uuid": "030ac860-8f6e-5df5-8c70-db5f3523a85a", 00:09:44.548 "is_configured": true, 00:09:44.548 "data_offset": 2048, 00:09:44.548 "data_size": 63488 00:09:44.548 }, 00:09:44.548 { 00:09:44.548 "name": "BaseBdev2", 00:09:44.548 "uuid": "cd75f5a8-3888-577a-8897-e6d3eac2f1af", 00:09:44.548 "is_configured": true, 00:09:44.548 "data_offset": 2048, 00:09:44.548 "data_size": 63488 00:09:44.548 } 00:09:44.548 ] 00:09:44.548 }' 00:09:44.548 20:06:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.548 20:06:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.115 20:06:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:45.115 20:06:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.115 20:06:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.115 [2024-10-17 20:06:30.542798] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:45.115 [2024-10-17 20:06:30.542838] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:45.115 [2024-10-17 20:06:30.546321] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:45.115 [2024-10-17 20:06:30.546530] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:45.115 [2024-10-17 20:06:30.546592] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:45.115 [2024-10-17 20:06:30.546616] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:45.115 { 
00:09:45.115 "results": [ 00:09:45.115 { 00:09:45.115 "job": "raid_bdev1", 00:09:45.115 "core_mask": "0x1", 00:09:45.115 "workload": "randrw", 00:09:45.115 "percentage": 50, 00:09:45.115 "status": "finished", 00:09:45.115 "queue_depth": 1, 00:09:45.115 "io_size": 131072, 00:09:45.115 "runtime": 1.407561, 00:09:45.115 "iops": 11391.33579290702, 00:09:45.115 "mibps": 1423.9169741133776, 00:09:45.115 "io_failed": 1, 00:09:45.115 "io_timeout": 0, 00:09:45.115 "avg_latency_us": 122.4864100688834, 00:09:45.115 "min_latency_us": 37.46909090909091, 00:09:45.115 "max_latency_us": 1876.7127272727273 00:09:45.115 } 00:09:45.115 ], 00:09:45.115 "core_count": 1 00:09:45.115 } 00:09:45.115 20:06:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.115 20:06:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62427 00:09:45.115 20:06:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 62427 ']' 00:09:45.115 20:06:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 62427 00:09:45.116 20:06:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:09:45.116 20:06:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:45.116 20:06:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62427 00:09:45.116 killing process with pid 62427 00:09:45.116 20:06:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:45.116 20:06:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:45.116 20:06:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62427' 00:09:45.116 20:06:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 62427 00:09:45.116 [2024-10-17 20:06:30.582628] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:45.116 20:06:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 62427 00:09:45.116 [2024-10-17 20:06:30.699719] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:46.492 20:06:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.MbHeDli4DG 00:09:46.492 20:06:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:46.492 20:06:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:46.492 20:06:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:09:46.492 20:06:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:46.492 20:06:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:46.492 20:06:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:46.492 ************************************ 00:09:46.492 END TEST raid_write_error_test 00:09:46.492 ************************************ 00:09:46.492 20:06:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:09:46.492 00:09:46.492 real 0m4.445s 00:09:46.492 user 0m5.614s 00:09:46.492 sys 0m0.537s 00:09:46.492 20:06:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:46.492 20:06:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.492 20:06:31 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:46.492 20:06:31 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:09:46.492 20:06:31 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:46.492 20:06:31 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:46.492 20:06:31 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:09:46.492 ************************************ 00:09:46.492 START TEST raid_state_function_test 00:09:46.492 ************************************ 00:09:46.492 20:06:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 false 00:09:46.492 20:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:46.492 20:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:46.492 20:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:46.492 20:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:46.492 20:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:46.492 20:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:46.492 20:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:46.492 20:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:46.492 20:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:46.492 20:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:46.492 20:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:46.492 20:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:46.492 Process raid pid: 62571 00:09:46.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:46.492 20:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:46.492 20:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:46.492 20:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:46.492 20:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:46.492 20:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:46.492 20:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:46.492 20:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:46.492 20:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:46.492 20:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:46.492 20:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:46.492 20:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62571 00:09:46.492 20:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:46.492 20:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62571' 00:09:46.492 20:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62571 00:09:46.492 20:06:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 62571 ']' 00:09:46.492 20:06:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:46.492 20:06:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:46.492 20:06:31 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:46.492 20:06:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:46.492 20:06:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.492 [2024-10-17 20:06:31.917605] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 00:09:46.492 [2024-10-17 20:06:31.917788] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:46.492 [2024-10-17 20:06:32.097069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.751 [2024-10-17 20:06:32.221931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.009 [2024-10-17 20:06:32.416107] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:47.010 [2024-10-17 20:06:32.416158] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:47.268 20:06:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:47.268 20:06:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:09:47.268 20:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:47.268 20:06:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.268 20:06:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.268 [2024-10-17 20:06:32.863247] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:47.268 [2024-10-17 20:06:32.863313] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:47.268 [2024-10-17 20:06:32.863331] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:47.268 [2024-10-17 20:06:32.863363] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:47.268 20:06:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.268 20:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:47.268 20:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:47.268 20:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:47.268 20:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:47.268 20:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:47.268 20:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:47.268 20:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.268 20:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.268 20:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.268 20:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.268 20:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.268 20:06:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.268 20:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:47.268 20:06:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.268 20:06:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.526 20:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.526 "name": "Existed_Raid", 00:09:47.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.526 "strip_size_kb": 0, 00:09:47.526 "state": "configuring", 00:09:47.526 "raid_level": "raid1", 00:09:47.526 "superblock": false, 00:09:47.526 "num_base_bdevs": 2, 00:09:47.526 "num_base_bdevs_discovered": 0, 00:09:47.526 "num_base_bdevs_operational": 2, 00:09:47.526 "base_bdevs_list": [ 00:09:47.526 { 00:09:47.526 "name": "BaseBdev1", 00:09:47.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.526 "is_configured": false, 00:09:47.526 "data_offset": 0, 00:09:47.526 "data_size": 0 00:09:47.526 }, 00:09:47.526 { 00:09:47.526 "name": "BaseBdev2", 00:09:47.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.526 "is_configured": false, 00:09:47.526 "data_offset": 0, 00:09:47.526 "data_size": 0 00:09:47.526 } 00:09:47.526 ] 00:09:47.526 }' 00:09:47.526 20:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.526 20:06:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.785 20:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:47.785 20:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.785 20:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.785 [2024-10-17 20:06:33.395314] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:47.785 [2024-10-17 20:06:33.395574] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:47.785 20:06:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.785 20:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:47.785 20:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.786 20:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.786 [2024-10-17 20:06:33.403339] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:47.786 [2024-10-17 20:06:33.403434] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:47.786 [2024-10-17 20:06:33.403452] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:47.786 [2024-10-17 20:06:33.403471] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:47.786 20:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.786 20:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:47.786 20:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.786 20:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.044 [2024-10-17 20:06:33.446782] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:48.044 BaseBdev1 00:09:48.044 20:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.044 20:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:48.044 20:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:48.044 20:06:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:48.044 20:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:48.044 20:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:48.044 20:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:48.044 20:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:48.044 20:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.044 20:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.044 20:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.045 20:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:48.045 20:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.045 20:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.045 [ 00:09:48.045 { 00:09:48.045 "name": "BaseBdev1", 00:09:48.045 "aliases": [ 00:09:48.045 "777d0a3b-dfdb-463d-a4f9-dea6b5f88615" 00:09:48.045 ], 00:09:48.045 "product_name": "Malloc disk", 00:09:48.045 "block_size": 512, 00:09:48.045 "num_blocks": 65536, 00:09:48.045 "uuid": "777d0a3b-dfdb-463d-a4f9-dea6b5f88615", 00:09:48.045 "assigned_rate_limits": { 00:09:48.045 "rw_ios_per_sec": 0, 00:09:48.045 "rw_mbytes_per_sec": 0, 00:09:48.045 "r_mbytes_per_sec": 0, 00:09:48.045 "w_mbytes_per_sec": 0 00:09:48.045 }, 00:09:48.045 "claimed": true, 00:09:48.045 "claim_type": "exclusive_write", 00:09:48.045 "zoned": false, 00:09:48.045 "supported_io_types": { 00:09:48.045 "read": true, 00:09:48.045 "write": true, 00:09:48.045 "unmap": true, 00:09:48.045 "flush": true, 00:09:48.045 "reset": true, 00:09:48.045 
"nvme_admin": false, 00:09:48.045 "nvme_io": false, 00:09:48.045 "nvme_io_md": false, 00:09:48.045 "write_zeroes": true, 00:09:48.045 "zcopy": true, 00:09:48.045 "get_zone_info": false, 00:09:48.045 "zone_management": false, 00:09:48.045 "zone_append": false, 00:09:48.045 "compare": false, 00:09:48.045 "compare_and_write": false, 00:09:48.045 "abort": true, 00:09:48.045 "seek_hole": false, 00:09:48.045 "seek_data": false, 00:09:48.045 "copy": true, 00:09:48.045 "nvme_iov_md": false 00:09:48.045 }, 00:09:48.045 "memory_domains": [ 00:09:48.045 { 00:09:48.045 "dma_device_id": "system", 00:09:48.045 "dma_device_type": 1 00:09:48.045 }, 00:09:48.045 { 00:09:48.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.045 "dma_device_type": 2 00:09:48.045 } 00:09:48.045 ], 00:09:48.045 "driver_specific": {} 00:09:48.045 } 00:09:48.045 ] 00:09:48.045 20:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.045 20:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:48.045 20:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:48.045 20:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:48.045 20:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:48.045 20:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:48.045 20:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:48.045 20:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:48.045 20:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.045 20:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:09:48.045 20:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.045 20:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.045 20:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.045 20:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.045 20:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:48.045 20:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.045 20:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.045 20:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.045 "name": "Existed_Raid", 00:09:48.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.045 "strip_size_kb": 0, 00:09:48.045 "state": "configuring", 00:09:48.045 "raid_level": "raid1", 00:09:48.045 "superblock": false, 00:09:48.045 "num_base_bdevs": 2, 00:09:48.045 "num_base_bdevs_discovered": 1, 00:09:48.045 "num_base_bdevs_operational": 2, 00:09:48.045 "base_bdevs_list": [ 00:09:48.045 { 00:09:48.045 "name": "BaseBdev1", 00:09:48.045 "uuid": "777d0a3b-dfdb-463d-a4f9-dea6b5f88615", 00:09:48.045 "is_configured": true, 00:09:48.045 "data_offset": 0, 00:09:48.045 "data_size": 65536 00:09:48.045 }, 00:09:48.045 { 00:09:48.045 "name": "BaseBdev2", 00:09:48.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.045 "is_configured": false, 00:09:48.045 "data_offset": 0, 00:09:48.045 "data_size": 0 00:09:48.045 } 00:09:48.045 ] 00:09:48.045 }' 00:09:48.045 20:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.045 20:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.613 20:06:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:48.613 20:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.613 20:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.613 [2024-10-17 20:06:34.002982] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:48.613 [2024-10-17 20:06:34.003074] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:48.613 20:06:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.613 20:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:48.613 20:06:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.613 20:06:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.613 [2024-10-17 20:06:34.011075] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:48.613 [2024-10-17 20:06:34.013708] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:48.613 [2024-10-17 20:06:34.013773] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:48.613 20:06:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.613 20:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:48.613 20:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:48.613 20:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:48.613 20:06:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:48.613 20:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:48.613 20:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:48.613 20:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:48.613 20:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:48.613 20:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.613 20:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.613 20:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.613 20:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.613 20:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.613 20:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:48.613 20:06:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.613 20:06:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.613 20:06:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.613 20:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.613 "name": "Existed_Raid", 00:09:48.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.613 "strip_size_kb": 0, 00:09:48.613 "state": "configuring", 00:09:48.613 "raid_level": "raid1", 00:09:48.613 "superblock": false, 00:09:48.613 "num_base_bdevs": 2, 00:09:48.613 "num_base_bdevs_discovered": 1, 00:09:48.613 "num_base_bdevs_operational": 2, 
00:09:48.613 "base_bdevs_list": [ 00:09:48.613 { 00:09:48.613 "name": "BaseBdev1", 00:09:48.613 "uuid": "777d0a3b-dfdb-463d-a4f9-dea6b5f88615", 00:09:48.613 "is_configured": true, 00:09:48.613 "data_offset": 0, 00:09:48.613 "data_size": 65536 00:09:48.613 }, 00:09:48.613 { 00:09:48.613 "name": "BaseBdev2", 00:09:48.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.613 "is_configured": false, 00:09:48.613 "data_offset": 0, 00:09:48.613 "data_size": 0 00:09:48.613 } 00:09:48.613 ] 00:09:48.613 }' 00:09:48.613 20:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.613 20:06:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.895 20:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:48.895 20:06:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.895 20:06:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.154 [2024-10-17 20:06:34.574291] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:49.154 [2024-10-17 20:06:34.574410] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:49.154 [2024-10-17 20:06:34.574423] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:49.154 [2024-10-17 20:06:34.574762] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:49.154 [2024-10-17 20:06:34.574990] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:49.154 [2024-10-17 20:06:34.575035] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:49.154 [2024-10-17 20:06:34.575370] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:49.154 BaseBdev2 00:09:49.154 
20:06:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.154 20:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:49.154 20:06:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:49.154 20:06:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:49.154 20:06:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:49.154 20:06:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:49.154 20:06:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:49.154 20:06:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:49.154 20:06:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.154 20:06:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.154 20:06:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.154 20:06:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:49.154 20:06:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.154 20:06:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.154 [ 00:09:49.154 { 00:09:49.154 "name": "BaseBdev2", 00:09:49.154 "aliases": [ 00:09:49.154 "60f476de-a8ce-412a-bfa7-cff82928278f" 00:09:49.154 ], 00:09:49.154 "product_name": "Malloc disk", 00:09:49.154 "block_size": 512, 00:09:49.154 "num_blocks": 65536, 00:09:49.154 "uuid": "60f476de-a8ce-412a-bfa7-cff82928278f", 00:09:49.154 "assigned_rate_limits": { 00:09:49.154 "rw_ios_per_sec": 0, 00:09:49.154 "rw_mbytes_per_sec": 0, 
00:09:49.154 "r_mbytes_per_sec": 0, 00:09:49.154 "w_mbytes_per_sec": 0 00:09:49.154 }, 00:09:49.154 "claimed": true, 00:09:49.154 "claim_type": "exclusive_write", 00:09:49.154 "zoned": false, 00:09:49.154 "supported_io_types": { 00:09:49.154 "read": true, 00:09:49.154 "write": true, 00:09:49.154 "unmap": true, 00:09:49.154 "flush": true, 00:09:49.154 "reset": true, 00:09:49.154 "nvme_admin": false, 00:09:49.154 "nvme_io": false, 00:09:49.154 "nvme_io_md": false, 00:09:49.154 "write_zeroes": true, 00:09:49.154 "zcopy": true, 00:09:49.154 "get_zone_info": false, 00:09:49.154 "zone_management": false, 00:09:49.154 "zone_append": false, 00:09:49.154 "compare": false, 00:09:49.154 "compare_and_write": false, 00:09:49.154 "abort": true, 00:09:49.154 "seek_hole": false, 00:09:49.154 "seek_data": false, 00:09:49.154 "copy": true, 00:09:49.154 "nvme_iov_md": false 00:09:49.154 }, 00:09:49.154 "memory_domains": [ 00:09:49.154 { 00:09:49.154 "dma_device_id": "system", 00:09:49.154 "dma_device_type": 1 00:09:49.154 }, 00:09:49.154 { 00:09:49.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.154 "dma_device_type": 2 00:09:49.154 } 00:09:49.154 ], 00:09:49.154 "driver_specific": {} 00:09:49.154 } 00:09:49.154 ] 00:09:49.154 20:06:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.154 20:06:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:49.154 20:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:49.154 20:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:49.154 20:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:49.154 20:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:49.154 20:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:09:49.154 20:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:49.154 20:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:49.154 20:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:49.154 20:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.154 20:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.154 20:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.154 20:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.154 20:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.154 20:06:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.155 20:06:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.155 20:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:49.155 20:06:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.155 20:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.155 "name": "Existed_Raid", 00:09:49.155 "uuid": "2b1583d3-3988-468e-9717-ea279bc20446", 00:09:49.155 "strip_size_kb": 0, 00:09:49.155 "state": "online", 00:09:49.155 "raid_level": "raid1", 00:09:49.155 "superblock": false, 00:09:49.155 "num_base_bdevs": 2, 00:09:49.155 "num_base_bdevs_discovered": 2, 00:09:49.155 "num_base_bdevs_operational": 2, 00:09:49.155 "base_bdevs_list": [ 00:09:49.155 { 00:09:49.155 "name": "BaseBdev1", 00:09:49.155 "uuid": "777d0a3b-dfdb-463d-a4f9-dea6b5f88615", 00:09:49.155 "is_configured": 
true, 00:09:49.155 "data_offset": 0, 00:09:49.155 "data_size": 65536 00:09:49.155 }, 00:09:49.155 { 00:09:49.155 "name": "BaseBdev2", 00:09:49.155 "uuid": "60f476de-a8ce-412a-bfa7-cff82928278f", 00:09:49.155 "is_configured": true, 00:09:49.155 "data_offset": 0, 00:09:49.155 "data_size": 65536 00:09:49.155 } 00:09:49.155 ] 00:09:49.155 }' 00:09:49.155 20:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.155 20:06:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.721 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:49.721 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:49.721 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:49.721 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:49.721 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:49.721 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:49.721 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:49.721 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:49.721 20:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.721 20:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.721 [2024-10-17 20:06:35.142851] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:49.721 20:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.721 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 
00:09:49.721 "name": "Existed_Raid", 00:09:49.721 "aliases": [ 00:09:49.721 "2b1583d3-3988-468e-9717-ea279bc20446" 00:09:49.721 ], 00:09:49.721 "product_name": "Raid Volume", 00:09:49.721 "block_size": 512, 00:09:49.721 "num_blocks": 65536, 00:09:49.721 "uuid": "2b1583d3-3988-468e-9717-ea279bc20446", 00:09:49.721 "assigned_rate_limits": { 00:09:49.721 "rw_ios_per_sec": 0, 00:09:49.721 "rw_mbytes_per_sec": 0, 00:09:49.721 "r_mbytes_per_sec": 0, 00:09:49.721 "w_mbytes_per_sec": 0 00:09:49.721 }, 00:09:49.721 "claimed": false, 00:09:49.721 "zoned": false, 00:09:49.721 "supported_io_types": { 00:09:49.721 "read": true, 00:09:49.721 "write": true, 00:09:49.721 "unmap": false, 00:09:49.721 "flush": false, 00:09:49.721 "reset": true, 00:09:49.721 "nvme_admin": false, 00:09:49.721 "nvme_io": false, 00:09:49.721 "nvme_io_md": false, 00:09:49.721 "write_zeroes": true, 00:09:49.721 "zcopy": false, 00:09:49.721 "get_zone_info": false, 00:09:49.721 "zone_management": false, 00:09:49.721 "zone_append": false, 00:09:49.721 "compare": false, 00:09:49.721 "compare_and_write": false, 00:09:49.721 "abort": false, 00:09:49.721 "seek_hole": false, 00:09:49.721 "seek_data": false, 00:09:49.721 "copy": false, 00:09:49.721 "nvme_iov_md": false 00:09:49.721 }, 00:09:49.721 "memory_domains": [ 00:09:49.721 { 00:09:49.721 "dma_device_id": "system", 00:09:49.721 "dma_device_type": 1 00:09:49.721 }, 00:09:49.721 { 00:09:49.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.721 "dma_device_type": 2 00:09:49.721 }, 00:09:49.721 { 00:09:49.721 "dma_device_id": "system", 00:09:49.721 "dma_device_type": 1 00:09:49.721 }, 00:09:49.721 { 00:09:49.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.721 "dma_device_type": 2 00:09:49.721 } 00:09:49.721 ], 00:09:49.721 "driver_specific": { 00:09:49.721 "raid": { 00:09:49.721 "uuid": "2b1583d3-3988-468e-9717-ea279bc20446", 00:09:49.721 "strip_size_kb": 0, 00:09:49.721 "state": "online", 00:09:49.721 "raid_level": "raid1", 00:09:49.721 "superblock": 
false, 00:09:49.721 "num_base_bdevs": 2, 00:09:49.721 "num_base_bdevs_discovered": 2, 00:09:49.721 "num_base_bdevs_operational": 2, 00:09:49.721 "base_bdevs_list": [ 00:09:49.721 { 00:09:49.721 "name": "BaseBdev1", 00:09:49.721 "uuid": "777d0a3b-dfdb-463d-a4f9-dea6b5f88615", 00:09:49.721 "is_configured": true, 00:09:49.721 "data_offset": 0, 00:09:49.721 "data_size": 65536 00:09:49.721 }, 00:09:49.721 { 00:09:49.721 "name": "BaseBdev2", 00:09:49.721 "uuid": "60f476de-a8ce-412a-bfa7-cff82928278f", 00:09:49.721 "is_configured": true, 00:09:49.721 "data_offset": 0, 00:09:49.721 "data_size": 65536 00:09:49.721 } 00:09:49.721 ] 00:09:49.721 } 00:09:49.721 } 00:09:49.721 }' 00:09:49.721 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:49.721 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:49.721 BaseBdev2' 00:09:49.721 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.721 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:49.721 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:49.721 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.721 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:49.721 20:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.722 20:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.722 20:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.722 20:06:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:49.722 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:49.722 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:49.722 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:49.722 20:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.722 20:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.722 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.722 20:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.980 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:49.980 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:49.980 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:49.980 20:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.980 20:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.980 [2024-10-17 20:06:35.402616] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:49.980 20:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.980 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:49.980 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:49.980 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case 
$1 in 00:09:49.980 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:49.980 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:49.980 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:09:49.980 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:49.980 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:49.980 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:49.980 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:49.980 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:49.980 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.980 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.980 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.980 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.980 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:49.980 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.980 20:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.980 20:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.980 20:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.980 20:06:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.980 "name": "Existed_Raid", 00:09:49.980 "uuid": "2b1583d3-3988-468e-9717-ea279bc20446", 00:09:49.980 "strip_size_kb": 0, 00:09:49.980 "state": "online", 00:09:49.980 "raid_level": "raid1", 00:09:49.980 "superblock": false, 00:09:49.980 "num_base_bdevs": 2, 00:09:49.980 "num_base_bdevs_discovered": 1, 00:09:49.980 "num_base_bdevs_operational": 1, 00:09:49.980 "base_bdevs_list": [ 00:09:49.980 { 00:09:49.980 "name": null, 00:09:49.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:49.980 "is_configured": false, 00:09:49.980 "data_offset": 0, 00:09:49.980 "data_size": 65536 00:09:49.980 }, 00:09:49.980 { 00:09:49.980 "name": "BaseBdev2", 00:09:49.980 "uuid": "60f476de-a8ce-412a-bfa7-cff82928278f", 00:09:49.980 "is_configured": true, 00:09:49.980 "data_offset": 0, 00:09:49.980 "data_size": 65536 00:09:49.980 } 00:09:49.980 ] 00:09:49.980 }' 00:09:49.980 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.980 20:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.546 20:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:50.546 20:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:50.546 20:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.546 20:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:50.546 20:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.546 20:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.546 20:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.546 20:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:50.546 
20:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:50.546 20:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:50.546 20:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.546 20:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.546 [2024-10-17 20:06:36.093550] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:50.546 [2024-10-17 20:06:36.093677] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:50.546 [2024-10-17 20:06:36.177130] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:50.546 [2024-10-17 20:06:36.177194] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:50.546 [2024-10-17 20:06:36.177214] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:50.546 20:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.546 20:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:50.546 20:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:50.546 20:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:50.546 20:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.546 20:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.546 20:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.546 20:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:50.805 20:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:50.805 20:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:50.805 20:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:50.805 20:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62571 00:09:50.805 20:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 62571 ']' 00:09:50.805 20:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 62571 00:09:50.805 20:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:09:50.805 20:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:50.805 20:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62571 00:09:50.805 killing process with pid 62571 00:09:50.805 20:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:50.805 20:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:50.805 20:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62571' 00:09:50.805 20:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 62571 00:09:50.805 [2024-10-17 20:06:36.269188] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:50.805 20:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 62571 00:09:50.805 [2024-10-17 20:06:36.283566] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:51.739 20:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:51.739 00:09:51.739 real 0m5.448s 00:09:51.739 user 0m8.292s 00:09:51.739 sys 0m0.790s 
00:09:51.739 20:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:51.739 20:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.739 ************************************ 00:09:51.739 END TEST raid_state_function_test 00:09:51.739 ************************************ 00:09:51.739 20:06:37 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:09:51.739 20:06:37 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:51.739 20:06:37 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:51.739 20:06:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:51.739 ************************************ 00:09:51.739 START TEST raid_state_function_test_sb 00:09:51.739 ************************************ 00:09:51.739 20:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:09:51.739 20:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:51.739 20:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:51.739 20:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:51.739 20:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:51.739 20:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:51.739 20:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:51.739 20:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:51.739 20:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:51.739 20:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= 
num_base_bdevs )) 00:09:51.739 20:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:51.739 20:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:51.739 20:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:51.739 20:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:51.739 20:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:51.739 20:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:51.739 20:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:51.739 20:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:51.739 20:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:51.739 20:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:51.739 20:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:51.739 20:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:51.739 20:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:51.739 20:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62829 00:09:51.740 20:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62829' 00:09:51.740 Process raid pid: 62829 00:09:51.740 20:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:51.740 20:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # 
waitforlisten 62829 00:09:51.740 20:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 62829 ']' 00:09:51.740 20:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:51.740 20:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:51.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:51.740 20:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:51.740 20:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:51.740 20:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.998 [2024-10-17 20:06:37.401370] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 
00:09:51.998 [2024-10-17 20:06:37.401894] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:51.998 [2024-10-17 20:06:37.579700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.256 [2024-10-17 20:06:37.706953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.514 [2024-10-17 20:06:37.908853] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:52.514 [2024-10-17 20:06:37.909284] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:52.773 20:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:52.773 20:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:09:52.773 20:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:52.773 20:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.773 20:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.773 [2024-10-17 20:06:38.415646] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:52.773 [2024-10-17 20:06:38.415873] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:52.773 [2024-10-17 20:06:38.415900] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:52.773 [2024-10-17 20:06:38.415918] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:52.773 20:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.773 
20:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:52.773 20:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:52.773 20:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:52.773 20:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:52.773 20:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:52.773 20:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:52.773 20:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.773 20:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.773 20:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.773 20:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.050 20:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.050 20:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.050 20:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.050 20:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.050 20:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.050 20:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.050 "name": "Existed_Raid", 00:09:53.050 "uuid": "281f553e-d4d3-4f2f-9ef9-57dd568505e6", 00:09:53.050 "strip_size_kb": 0, 
00:09:53.050 "state": "configuring", 00:09:53.050 "raid_level": "raid1", 00:09:53.050 "superblock": true, 00:09:53.050 "num_base_bdevs": 2, 00:09:53.050 "num_base_bdevs_discovered": 0, 00:09:53.050 "num_base_bdevs_operational": 2, 00:09:53.050 "base_bdevs_list": [ 00:09:53.050 { 00:09:53.050 "name": "BaseBdev1", 00:09:53.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.050 "is_configured": false, 00:09:53.050 "data_offset": 0, 00:09:53.050 "data_size": 0 00:09:53.050 }, 00:09:53.050 { 00:09:53.050 "name": "BaseBdev2", 00:09:53.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.050 "is_configured": false, 00:09:53.050 "data_offset": 0, 00:09:53.050 "data_size": 0 00:09:53.050 } 00:09:53.050 ] 00:09:53.050 }' 00:09:53.050 20:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.050 20:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.340 20:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:53.340 20:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.340 20:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.340 [2024-10-17 20:06:38.927739] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:53.340 [2024-10-17 20:06:38.927783] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:53.340 20:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.340 20:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:53.340 20:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.340 20:06:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.340 [2024-10-17 20:06:38.939753] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:53.340 [2024-10-17 20:06:38.939972] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:53.340 [2024-10-17 20:06:38.940124] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:53.340 [2024-10-17 20:06:38.940209] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:53.340 20:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.340 20:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:53.340 20:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.340 20:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.340 [2024-10-17 20:06:38.982972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:53.340 BaseBdev1 00:09:53.340 20:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.340 20:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:53.340 20:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:53.340 20:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:53.340 20:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:53.340 20:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:53.340 20:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:09:53.340 20:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:53.340 20:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.340 20:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.599 20:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.599 20:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:53.599 20:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.599 20:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.599 [ 00:09:53.599 { 00:09:53.599 "name": "BaseBdev1", 00:09:53.599 "aliases": [ 00:09:53.599 "da904aeb-a43f-4eec-adcd-c001c88610f4" 00:09:53.599 ], 00:09:53.599 "product_name": "Malloc disk", 00:09:53.599 "block_size": 512, 00:09:53.599 "num_blocks": 65536, 00:09:53.599 "uuid": "da904aeb-a43f-4eec-adcd-c001c88610f4", 00:09:53.599 "assigned_rate_limits": { 00:09:53.599 "rw_ios_per_sec": 0, 00:09:53.599 "rw_mbytes_per_sec": 0, 00:09:53.599 "r_mbytes_per_sec": 0, 00:09:53.599 "w_mbytes_per_sec": 0 00:09:53.599 }, 00:09:53.599 "claimed": true, 00:09:53.599 "claim_type": "exclusive_write", 00:09:53.599 "zoned": false, 00:09:53.599 "supported_io_types": { 00:09:53.599 "read": true, 00:09:53.599 "write": true, 00:09:53.599 "unmap": true, 00:09:53.599 "flush": true, 00:09:53.599 "reset": true, 00:09:53.599 "nvme_admin": false, 00:09:53.599 "nvme_io": false, 00:09:53.599 "nvme_io_md": false, 00:09:53.599 "write_zeroes": true, 00:09:53.599 "zcopy": true, 00:09:53.599 "get_zone_info": false, 00:09:53.599 "zone_management": false, 00:09:53.599 "zone_append": false, 00:09:53.599 "compare": false, 00:09:53.599 "compare_and_write": false, 00:09:53.599 
"abort": true, 00:09:53.599 "seek_hole": false, 00:09:53.599 "seek_data": false, 00:09:53.599 "copy": true, 00:09:53.599 "nvme_iov_md": false 00:09:53.599 }, 00:09:53.599 "memory_domains": [ 00:09:53.599 { 00:09:53.599 "dma_device_id": "system", 00:09:53.599 "dma_device_type": 1 00:09:53.599 }, 00:09:53.599 { 00:09:53.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.599 "dma_device_type": 2 00:09:53.599 } 00:09:53.599 ], 00:09:53.599 "driver_specific": {} 00:09:53.599 } 00:09:53.599 ] 00:09:53.599 20:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.599 20:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:53.599 20:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:53.599 20:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:53.599 20:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:53.599 20:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:53.599 20:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:53.599 20:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:53.599 20:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.599 20:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.599 20:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.599 20:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.599 20:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:53.599 20:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.599 20:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.599 20:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.599 20:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.599 20:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.599 "name": "Existed_Raid", 00:09:53.599 "uuid": "c874cb6c-90cc-4797-bde8-d8f37f69a0fa", 00:09:53.599 "strip_size_kb": 0, 00:09:53.599 "state": "configuring", 00:09:53.599 "raid_level": "raid1", 00:09:53.599 "superblock": true, 00:09:53.599 "num_base_bdevs": 2, 00:09:53.599 "num_base_bdevs_discovered": 1, 00:09:53.599 "num_base_bdevs_operational": 2, 00:09:53.599 "base_bdevs_list": [ 00:09:53.599 { 00:09:53.599 "name": "BaseBdev1", 00:09:53.599 "uuid": "da904aeb-a43f-4eec-adcd-c001c88610f4", 00:09:53.599 "is_configured": true, 00:09:53.599 "data_offset": 2048, 00:09:53.599 "data_size": 63488 00:09:53.599 }, 00:09:53.599 { 00:09:53.599 "name": "BaseBdev2", 00:09:53.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.599 "is_configured": false, 00:09:53.599 "data_offset": 0, 00:09:53.599 "data_size": 0 00:09:53.599 } 00:09:53.599 ] 00:09:53.599 }' 00:09:53.599 20:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.599 20:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.167 20:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:54.167 20:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.167 20:06:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:54.167 [2024-10-17 20:06:39.535257] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:54.167 [2024-10-17 20:06:39.535322] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:54.167 20:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.167 20:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:54.167 20:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.167 20:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.167 [2024-10-17 20:06:39.547280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:54.167 [2024-10-17 20:06:39.550121] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:54.167 [2024-10-17 20:06:39.550308] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:54.167 20:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.167 20:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:54.167 20:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:54.167 20:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:54.167 20:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.167 20:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:54.167 20:06:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:54.167 20:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:54.167 20:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:54.167 20:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.167 20:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.167 20:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.167 20:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.167 20:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.167 20:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.167 20:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.167 20:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.167 20:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.167 20:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.167 "name": "Existed_Raid", 00:09:54.167 "uuid": "f005fdec-a993-4fba-8419-8d6c935063a8", 00:09:54.167 "strip_size_kb": 0, 00:09:54.167 "state": "configuring", 00:09:54.167 "raid_level": "raid1", 00:09:54.167 "superblock": true, 00:09:54.167 "num_base_bdevs": 2, 00:09:54.167 "num_base_bdevs_discovered": 1, 00:09:54.167 "num_base_bdevs_operational": 2, 00:09:54.167 "base_bdevs_list": [ 00:09:54.167 { 00:09:54.167 "name": "BaseBdev1", 00:09:54.167 "uuid": "da904aeb-a43f-4eec-adcd-c001c88610f4", 00:09:54.167 "is_configured": true, 00:09:54.167 "data_offset": 2048, 
00:09:54.167 "data_size": 63488 00:09:54.167 }, 00:09:54.167 { 00:09:54.167 "name": "BaseBdev2", 00:09:54.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.167 "is_configured": false, 00:09:54.167 "data_offset": 0, 00:09:54.167 "data_size": 0 00:09:54.167 } 00:09:54.167 ] 00:09:54.167 }' 00:09:54.167 20:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.167 20:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.425 20:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:54.425 20:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.425 20:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.685 [2024-10-17 20:06:40.096309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:54.685 [2024-10-17 20:06:40.096652] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:54.685 [2024-10-17 20:06:40.096670] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:54.685 BaseBdev2 00:09:54.685 [2024-10-17 20:06:40.096983] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:54.685 [2024-10-17 20:06:40.097190] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:54.685 [2024-10-17 20:06:40.097210] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:54.685 [2024-10-17 20:06:40.097430] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:54.685 20:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.685 20:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:09:54.685 20:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:54.685 20:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:54.685 20:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:54.685 20:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:54.685 20:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:54.685 20:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:54.685 20:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.685 20:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.685 20:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.685 20:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:54.685 20:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.685 20:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.685 [ 00:09:54.685 { 00:09:54.685 "name": "BaseBdev2", 00:09:54.685 "aliases": [ 00:09:54.685 "f8ef7c6f-9bc5-49c2-ab6a-7690763ce67f" 00:09:54.685 ], 00:09:54.685 "product_name": "Malloc disk", 00:09:54.685 "block_size": 512, 00:09:54.685 "num_blocks": 65536, 00:09:54.685 "uuid": "f8ef7c6f-9bc5-49c2-ab6a-7690763ce67f", 00:09:54.685 "assigned_rate_limits": { 00:09:54.685 "rw_ios_per_sec": 0, 00:09:54.685 "rw_mbytes_per_sec": 0, 00:09:54.685 "r_mbytes_per_sec": 0, 00:09:54.685 "w_mbytes_per_sec": 0 00:09:54.685 }, 00:09:54.685 "claimed": true, 00:09:54.685 "claim_type": 
"exclusive_write", 00:09:54.685 "zoned": false, 00:09:54.685 "supported_io_types": { 00:09:54.685 "read": true, 00:09:54.685 "write": true, 00:09:54.685 "unmap": true, 00:09:54.685 "flush": true, 00:09:54.685 "reset": true, 00:09:54.685 "nvme_admin": false, 00:09:54.685 "nvme_io": false, 00:09:54.685 "nvme_io_md": false, 00:09:54.685 "write_zeroes": true, 00:09:54.685 "zcopy": true, 00:09:54.685 "get_zone_info": false, 00:09:54.685 "zone_management": false, 00:09:54.685 "zone_append": false, 00:09:54.685 "compare": false, 00:09:54.685 "compare_and_write": false, 00:09:54.685 "abort": true, 00:09:54.685 "seek_hole": false, 00:09:54.685 "seek_data": false, 00:09:54.685 "copy": true, 00:09:54.685 "nvme_iov_md": false 00:09:54.685 }, 00:09:54.685 "memory_domains": [ 00:09:54.685 { 00:09:54.685 "dma_device_id": "system", 00:09:54.685 "dma_device_type": 1 00:09:54.685 }, 00:09:54.685 { 00:09:54.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.685 "dma_device_type": 2 00:09:54.685 } 00:09:54.685 ], 00:09:54.685 "driver_specific": {} 00:09:54.685 } 00:09:54.685 ] 00:09:54.685 20:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.685 20:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:54.685 20:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:54.685 20:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:54.685 20:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:54.685 20:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.685 20:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:54.685 20:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:09:54.686 20:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:54.686 20:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:54.686 20:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.686 20:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.686 20:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.686 20:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.686 20:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.686 20:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.686 20:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.686 20:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.686 20:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.686 20:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.686 "name": "Existed_Raid", 00:09:54.686 "uuid": "f005fdec-a993-4fba-8419-8d6c935063a8", 00:09:54.686 "strip_size_kb": 0, 00:09:54.686 "state": "online", 00:09:54.686 "raid_level": "raid1", 00:09:54.686 "superblock": true, 00:09:54.686 "num_base_bdevs": 2, 00:09:54.686 "num_base_bdevs_discovered": 2, 00:09:54.686 "num_base_bdevs_operational": 2, 00:09:54.686 "base_bdevs_list": [ 00:09:54.686 { 00:09:54.686 "name": "BaseBdev1", 00:09:54.686 "uuid": "da904aeb-a43f-4eec-adcd-c001c88610f4", 00:09:54.686 "is_configured": true, 00:09:54.686 "data_offset": 2048, 00:09:54.686 "data_size": 63488 
00:09:54.686 }, 00:09:54.686 { 00:09:54.686 "name": "BaseBdev2", 00:09:54.686 "uuid": "f8ef7c6f-9bc5-49c2-ab6a-7690763ce67f", 00:09:54.686 "is_configured": true, 00:09:54.686 "data_offset": 2048, 00:09:54.686 "data_size": 63488 00:09:54.686 } 00:09:54.686 ] 00:09:54.686 }' 00:09:54.686 20:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.686 20:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.254 20:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:55.254 20:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:55.254 20:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:55.254 20:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:55.254 20:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:55.254 20:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:55.254 20:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:55.254 20:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:55.254 20:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.254 20:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.254 [2024-10-17 20:06:40.660909] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:55.254 20:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.254 20:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:55.254 "name": 
"Existed_Raid", 00:09:55.254 "aliases": [ 00:09:55.254 "f005fdec-a993-4fba-8419-8d6c935063a8" 00:09:55.254 ], 00:09:55.254 "product_name": "Raid Volume", 00:09:55.254 "block_size": 512, 00:09:55.254 "num_blocks": 63488, 00:09:55.254 "uuid": "f005fdec-a993-4fba-8419-8d6c935063a8", 00:09:55.254 "assigned_rate_limits": { 00:09:55.254 "rw_ios_per_sec": 0, 00:09:55.254 "rw_mbytes_per_sec": 0, 00:09:55.254 "r_mbytes_per_sec": 0, 00:09:55.254 "w_mbytes_per_sec": 0 00:09:55.254 }, 00:09:55.254 "claimed": false, 00:09:55.254 "zoned": false, 00:09:55.254 "supported_io_types": { 00:09:55.254 "read": true, 00:09:55.254 "write": true, 00:09:55.254 "unmap": false, 00:09:55.254 "flush": false, 00:09:55.254 "reset": true, 00:09:55.254 "nvme_admin": false, 00:09:55.254 "nvme_io": false, 00:09:55.254 "nvme_io_md": false, 00:09:55.254 "write_zeroes": true, 00:09:55.254 "zcopy": false, 00:09:55.254 "get_zone_info": false, 00:09:55.254 "zone_management": false, 00:09:55.254 "zone_append": false, 00:09:55.254 "compare": false, 00:09:55.254 "compare_and_write": false, 00:09:55.254 "abort": false, 00:09:55.254 "seek_hole": false, 00:09:55.254 "seek_data": false, 00:09:55.254 "copy": false, 00:09:55.254 "nvme_iov_md": false 00:09:55.254 }, 00:09:55.254 "memory_domains": [ 00:09:55.254 { 00:09:55.254 "dma_device_id": "system", 00:09:55.254 "dma_device_type": 1 00:09:55.254 }, 00:09:55.254 { 00:09:55.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.254 "dma_device_type": 2 00:09:55.254 }, 00:09:55.254 { 00:09:55.254 "dma_device_id": "system", 00:09:55.254 "dma_device_type": 1 00:09:55.254 }, 00:09:55.254 { 00:09:55.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.254 "dma_device_type": 2 00:09:55.254 } 00:09:55.254 ], 00:09:55.254 "driver_specific": { 00:09:55.254 "raid": { 00:09:55.254 "uuid": "f005fdec-a993-4fba-8419-8d6c935063a8", 00:09:55.254 "strip_size_kb": 0, 00:09:55.254 "state": "online", 00:09:55.254 "raid_level": "raid1", 00:09:55.254 "superblock": true, 00:09:55.254 
"num_base_bdevs": 2, 00:09:55.254 "num_base_bdevs_discovered": 2, 00:09:55.254 "num_base_bdevs_operational": 2, 00:09:55.254 "base_bdevs_list": [ 00:09:55.254 { 00:09:55.254 "name": "BaseBdev1", 00:09:55.254 "uuid": "da904aeb-a43f-4eec-adcd-c001c88610f4", 00:09:55.254 "is_configured": true, 00:09:55.254 "data_offset": 2048, 00:09:55.254 "data_size": 63488 00:09:55.254 }, 00:09:55.254 { 00:09:55.254 "name": "BaseBdev2", 00:09:55.254 "uuid": "f8ef7c6f-9bc5-49c2-ab6a-7690763ce67f", 00:09:55.254 "is_configured": true, 00:09:55.254 "data_offset": 2048, 00:09:55.254 "data_size": 63488 00:09:55.254 } 00:09:55.254 ] 00:09:55.254 } 00:09:55.254 } 00:09:55.254 }' 00:09:55.254 20:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:55.254 20:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:55.254 BaseBdev2' 00:09:55.254 20:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.254 20:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:55.254 20:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:55.254 20:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:55.254 20:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.254 20:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.254 20:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.254 20:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:55.254 20:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:55.254 20:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:55.254 20:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:55.254 20:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.254 20:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:55.254 20:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.254 20:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.254 20:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.513 20:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:55.513 20:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:55.513 20:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:55.513 20:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.513 20:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.513 [2024-10-17 20:06:40.924633] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:55.513 20:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.513 20:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:55.513 20:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:55.514 20:06:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:55.514 20:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:09:55.514 20:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:55.514 20:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:09:55.514 20:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.514 20:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:55.514 20:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:55.514 20:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:55.514 20:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:55.514 20:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.514 20:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.514 20:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.514 20:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.514 20:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.514 20:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.514 20:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.514 20:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.514 20:06:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.514 20:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.514 "name": "Existed_Raid", 00:09:55.514 "uuid": "f005fdec-a993-4fba-8419-8d6c935063a8", 00:09:55.514 "strip_size_kb": 0, 00:09:55.514 "state": "online", 00:09:55.514 "raid_level": "raid1", 00:09:55.514 "superblock": true, 00:09:55.514 "num_base_bdevs": 2, 00:09:55.514 "num_base_bdevs_discovered": 1, 00:09:55.514 "num_base_bdevs_operational": 1, 00:09:55.514 "base_bdevs_list": [ 00:09:55.514 { 00:09:55.514 "name": null, 00:09:55.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.514 "is_configured": false, 00:09:55.514 "data_offset": 0, 00:09:55.514 "data_size": 63488 00:09:55.514 }, 00:09:55.514 { 00:09:55.514 "name": "BaseBdev2", 00:09:55.514 "uuid": "f8ef7c6f-9bc5-49c2-ab6a-7690763ce67f", 00:09:55.514 "is_configured": true, 00:09:55.514 "data_offset": 2048, 00:09:55.514 "data_size": 63488 00:09:55.514 } 00:09:55.514 ] 00:09:55.514 }' 00:09:55.514 20:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.514 20:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.081 20:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:56.081 20:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:56.081 20:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.081 20:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:56.081 20:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.081 20:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.081 20:06:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.081 20:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:56.081 20:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:56.081 20:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:56.081 20:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.081 20:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.081 [2024-10-17 20:06:41.603973] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:56.081 [2024-10-17 20:06:41.604318] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:56.081 [2024-10-17 20:06:41.687673] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:56.081 [2024-10-17 20:06:41.687743] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:56.081 [2024-10-17 20:06:41.687761] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:56.081 20:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.081 20:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:56.082 20:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:56.082 20:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.082 20:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.082 20:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:09:56.082 20:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:56.082 20:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.340 20:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:56.340 20:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:56.340 20:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:56.340 20:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62829 00:09:56.340 20:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 62829 ']' 00:09:56.340 20:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 62829 00:09:56.340 20:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:09:56.340 20:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:56.340 20:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62829 00:09:56.340 killing process with pid 62829 00:09:56.340 20:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:56.340 20:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:56.340 20:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62829' 00:09:56.340 20:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 62829 00:09:56.340 [2024-10-17 20:06:41.777329] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:56.340 20:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 62829 
00:09:56.340 [2024-10-17 20:06:41.792136] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:57.295 20:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:57.295 00:09:57.295 real 0m5.461s 00:09:57.295 user 0m8.284s 00:09:57.295 sys 0m0.812s 00:09:57.295 20:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:57.295 ************************************ 00:09:57.295 END TEST raid_state_function_test_sb 00:09:57.295 ************************************ 00:09:57.295 20:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.295 20:06:42 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:09:57.295 20:06:42 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:57.295 20:06:42 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:57.295 20:06:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:57.295 ************************************ 00:09:57.295 START TEST raid_superblock_test 00:09:57.295 ************************************ 00:09:57.295 20:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:09:57.295 20:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:09:57.295 20:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:09:57.295 20:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:57.295 20:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:57.295 20:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:57.295 20:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:57.295 20:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 
-- # base_bdevs_pt_uuid=() 00:09:57.295 20:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:57.295 20:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:57.295 20:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:57.295 20:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:57.295 20:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:57.295 20:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:57.295 20:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:09:57.295 20:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:09:57.295 20:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63081 00:09:57.295 20:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63081 00:09:57.295 20:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:57.295 20:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 63081 ']' 00:09:57.295 20:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.295 20:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:57.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:57.295 20:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:57.295 20:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:57.295 20:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.295 [2024-10-17 20:06:42.924227] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 00:09:57.295 [2024-10-17 20:06:42.924416] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63081 ] 00:09:57.554 [2024-10-17 20:06:43.099614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.812 [2024-10-17 20:06:43.223273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.812 [2024-10-17 20:06:43.414545] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:57.812 [2024-10-17 20:06:43.414618] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:58.381 20:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:58.381 20:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:09:58.381 20:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:58.381 20:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:58.381 20:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:58.381 20:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:58.381 20:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:58.381 20:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:58.381 20:06:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:58.381 20:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:58.381 20:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:58.381 20:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.381 20:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.381 malloc1 00:09:58.381 20:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.381 20:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:58.381 20:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.381 20:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.381 [2024-10-17 20:06:43.910574] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:58.381 [2024-10-17 20:06:43.910684] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:58.381 [2024-10-17 20:06:43.910722] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:58.381 [2024-10-17 20:06:43.910738] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:58.381 [2024-10-17 20:06:43.913777] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:58.381 [2024-10-17 20:06:43.913827] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:58.381 pt1 00:09:58.381 20:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.381 20:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:58.381 20:06:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:58.381 20:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:58.381 20:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:58.381 20:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:58.381 20:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:58.381 20:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:58.381 20:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:58.381 20:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:58.381 20:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.381 20:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.381 malloc2 00:09:58.381 20:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.381 20:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:58.381 20:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.381 20:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.381 [2024-10-17 20:06:43.962988] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:58.381 [2024-10-17 20:06:43.963111] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:58.381 [2024-10-17 20:06:43.963143] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:58.381 
[2024-10-17 20:06:43.963183] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:58.381 [2024-10-17 20:06:43.966060] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:58.381 [2024-10-17 20:06:43.966141] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:58.381 pt2 00:09:58.381 20:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.381 20:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:58.381 20:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:58.381 20:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:09:58.381 20:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.381 20:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.381 [2024-10-17 20:06:43.971032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:58.381 [2024-10-17 20:06:43.973712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:58.381 [2024-10-17 20:06:43.973904] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:58.381 [2024-10-17 20:06:43.973922] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:58.381 [2024-10-17 20:06:43.974322] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:58.381 [2024-10-17 20:06:43.974567] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:58.381 [2024-10-17 20:06:43.974588] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:58.381 [2024-10-17 20:06:43.974790] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:58.381 20:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.381 20:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:58.381 20:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:58.381 20:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:58.381 20:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:58.381 20:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:58.381 20:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:58.381 20:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.381 20:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.381 20:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.381 20:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.381 20:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.381 20:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:58.381 20:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.381 20:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.381 20:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.640 20:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.640 "name": "raid_bdev1", 00:09:58.640 "uuid": 
"4999e26f-6cde-4f8a-97a0-104b1dbf0a0e", 00:09:58.640 "strip_size_kb": 0, 00:09:58.640 "state": "online", 00:09:58.640 "raid_level": "raid1", 00:09:58.640 "superblock": true, 00:09:58.640 "num_base_bdevs": 2, 00:09:58.640 "num_base_bdevs_discovered": 2, 00:09:58.640 "num_base_bdevs_operational": 2, 00:09:58.640 "base_bdevs_list": [ 00:09:58.640 { 00:09:58.640 "name": "pt1", 00:09:58.640 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:58.640 "is_configured": true, 00:09:58.640 "data_offset": 2048, 00:09:58.640 "data_size": 63488 00:09:58.640 }, 00:09:58.640 { 00:09:58.640 "name": "pt2", 00:09:58.640 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:58.640 "is_configured": true, 00:09:58.640 "data_offset": 2048, 00:09:58.640 "data_size": 63488 00:09:58.640 } 00:09:58.640 ] 00:09:58.640 }' 00:09:58.640 20:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.640 20:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.898 20:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:58.898 20:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:58.898 20:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:58.898 20:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:58.898 20:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:58.898 20:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:58.899 20:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:58.899 20:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.899 20:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:58.899 20:06:44 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.899 [2024-10-17 20:06:44.519600] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:58.899 20:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.157 20:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:59.157 "name": "raid_bdev1", 00:09:59.157 "aliases": [ 00:09:59.157 "4999e26f-6cde-4f8a-97a0-104b1dbf0a0e" 00:09:59.157 ], 00:09:59.157 "product_name": "Raid Volume", 00:09:59.157 "block_size": 512, 00:09:59.157 "num_blocks": 63488, 00:09:59.157 "uuid": "4999e26f-6cde-4f8a-97a0-104b1dbf0a0e", 00:09:59.157 "assigned_rate_limits": { 00:09:59.157 "rw_ios_per_sec": 0, 00:09:59.157 "rw_mbytes_per_sec": 0, 00:09:59.157 "r_mbytes_per_sec": 0, 00:09:59.157 "w_mbytes_per_sec": 0 00:09:59.157 }, 00:09:59.157 "claimed": false, 00:09:59.157 "zoned": false, 00:09:59.157 "supported_io_types": { 00:09:59.157 "read": true, 00:09:59.157 "write": true, 00:09:59.157 "unmap": false, 00:09:59.157 "flush": false, 00:09:59.157 "reset": true, 00:09:59.157 "nvme_admin": false, 00:09:59.157 "nvme_io": false, 00:09:59.157 "nvme_io_md": false, 00:09:59.157 "write_zeroes": true, 00:09:59.157 "zcopy": false, 00:09:59.157 "get_zone_info": false, 00:09:59.157 "zone_management": false, 00:09:59.157 "zone_append": false, 00:09:59.157 "compare": false, 00:09:59.157 "compare_and_write": false, 00:09:59.157 "abort": false, 00:09:59.158 "seek_hole": false, 00:09:59.158 "seek_data": false, 00:09:59.158 "copy": false, 00:09:59.158 "nvme_iov_md": false 00:09:59.158 }, 00:09:59.158 "memory_domains": [ 00:09:59.158 { 00:09:59.158 "dma_device_id": "system", 00:09:59.158 "dma_device_type": 1 00:09:59.158 }, 00:09:59.158 { 00:09:59.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.158 "dma_device_type": 2 00:09:59.158 }, 00:09:59.158 { 00:09:59.158 "dma_device_id": "system", 00:09:59.158 "dma_device_type": 
1 00:09:59.158 }, 00:09:59.158 { 00:09:59.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.158 "dma_device_type": 2 00:09:59.158 } 00:09:59.158 ], 00:09:59.158 "driver_specific": { 00:09:59.158 "raid": { 00:09:59.158 "uuid": "4999e26f-6cde-4f8a-97a0-104b1dbf0a0e", 00:09:59.158 "strip_size_kb": 0, 00:09:59.158 "state": "online", 00:09:59.158 "raid_level": "raid1", 00:09:59.158 "superblock": true, 00:09:59.158 "num_base_bdevs": 2, 00:09:59.158 "num_base_bdevs_discovered": 2, 00:09:59.158 "num_base_bdevs_operational": 2, 00:09:59.158 "base_bdevs_list": [ 00:09:59.158 { 00:09:59.158 "name": "pt1", 00:09:59.158 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:59.158 "is_configured": true, 00:09:59.158 "data_offset": 2048, 00:09:59.158 "data_size": 63488 00:09:59.158 }, 00:09:59.158 { 00:09:59.158 "name": "pt2", 00:09:59.158 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:59.158 "is_configured": true, 00:09:59.158 "data_offset": 2048, 00:09:59.158 "data_size": 63488 00:09:59.158 } 00:09:59.158 ] 00:09:59.158 } 00:09:59.158 } 00:09:59.158 }' 00:09:59.158 20:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:59.158 20:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:59.158 pt2' 00:09:59.158 20:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.158 20:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:59.158 20:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:59.158 20:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:59.158 20:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" 
")' 00:09:59.158 20:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.158 20:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.158 20:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.158 20:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:59.158 20:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:59.158 20:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:59.158 20:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:59.158 20:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.158 20:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.158 20:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.158 20:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.158 20:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:59.158 20:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:59.158 20:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:59.158 20:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:59.158 20:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.158 20:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.158 [2024-10-17 20:06:44.783580] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:59.158 20:06:44 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.417 20:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4999e26f-6cde-4f8a-97a0-104b1dbf0a0e 00:09:59.417 20:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 4999e26f-6cde-4f8a-97a0-104b1dbf0a0e ']' 00:09:59.417 20:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:59.417 20:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.417 20:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.417 [2024-10-17 20:06:44.835316] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:59.417 [2024-10-17 20:06:44.835349] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:59.417 [2024-10-17 20:06:44.835459] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:59.418 [2024-10-17 20:06:44.835536] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:59.418 [2024-10-17 20:06:44.835555] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:59.418 20:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.418 20:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.418 20:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:59.418 20:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.418 20:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.418 20:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.418 20:06:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:59.418 20:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:59.418 20:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:59.418 20:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:59.418 20:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.418 20:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.418 20:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.418 20:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:59.418 20:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:59.418 20:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.418 20:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.418 20:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.418 20:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:59.418 20:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.418 20:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:59.418 20:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.418 20:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.418 20:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:59.418 20:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd 
bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:59.418 20:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:59.418 20:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:59.418 20:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:59.418 20:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:59.418 20:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:59.418 20:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:59.418 20:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:59.418 20:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.418 20:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.418 [2024-10-17 20:06:44.975378] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:59.418 [2024-10-17 20:06:44.978128] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:59.418 [2024-10-17 20:06:44.978224] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:59.418 [2024-10-17 20:06:44.978305] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:59.418 [2024-10-17 20:06:44.978332] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:59.418 [2024-10-17 20:06:44.978348] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 
name raid_bdev1, state configuring 00:09:59.418 request: 00:09:59.418 { 00:09:59.418 "name": "raid_bdev1", 00:09:59.418 "raid_level": "raid1", 00:09:59.418 "base_bdevs": [ 00:09:59.418 "malloc1", 00:09:59.418 "malloc2" 00:09:59.418 ], 00:09:59.418 "superblock": false, 00:09:59.418 "method": "bdev_raid_create", 00:09:59.418 "req_id": 1 00:09:59.418 } 00:09:59.418 Got JSON-RPC error response 00:09:59.418 response: 00:09:59.418 { 00:09:59.418 "code": -17, 00:09:59.418 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:59.418 } 00:09:59.418 20:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:59.418 20:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:59.418 20:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:59.418 20:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:59.418 20:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:59.418 20:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.418 20:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:59.418 20:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.418 20:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.418 20:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.418 20:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:59.418 20:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:59.418 20:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:59.418 20:06:45 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.418 20:06:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.418 [2024-10-17 20:06:45.043376] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:59.418 [2024-10-17 20:06:45.043602] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:59.418 [2024-10-17 20:06:45.043674] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:59.418 [2024-10-17 20:06:45.043821] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:59.418 [2024-10-17 20:06:45.046871] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:59.418 [2024-10-17 20:06:45.047046] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:59.418 [2024-10-17 20:06:45.047270] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:59.418 [2024-10-17 20:06:45.047466] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:59.418 pt1 00:09:59.418 20:06:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.418 20:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:59.418 20:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:59.418 20:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.418 20:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:59.418 20:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:59.418 20:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:59.418 20:06:45 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.418 20:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.418 20:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.418 20:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.418 20:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.418 20:06:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.418 20:06:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.418 20:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:59.677 20:06:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.677 20:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.677 "name": "raid_bdev1", 00:09:59.677 "uuid": "4999e26f-6cde-4f8a-97a0-104b1dbf0a0e", 00:09:59.677 "strip_size_kb": 0, 00:09:59.677 "state": "configuring", 00:09:59.677 "raid_level": "raid1", 00:09:59.677 "superblock": true, 00:09:59.677 "num_base_bdevs": 2, 00:09:59.677 "num_base_bdevs_discovered": 1, 00:09:59.677 "num_base_bdevs_operational": 2, 00:09:59.677 "base_bdevs_list": [ 00:09:59.677 { 00:09:59.677 "name": "pt1", 00:09:59.677 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:59.677 "is_configured": true, 00:09:59.677 "data_offset": 2048, 00:09:59.677 "data_size": 63488 00:09:59.677 }, 00:09:59.677 { 00:09:59.677 "name": null, 00:09:59.677 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:59.677 "is_configured": false, 00:09:59.677 "data_offset": 2048, 00:09:59.677 "data_size": 63488 00:09:59.677 } 00:09:59.677 ] 00:09:59.677 }' 00:09:59.677 20:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.677 20:06:45 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.936 20:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:09:59.936 20:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:59.936 20:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:59.936 20:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:59.936 20:06:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.936 20:06:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.936 [2024-10-17 20:06:45.579525] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:59.936 [2024-10-17 20:06:45.579784] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:59.936 [2024-10-17 20:06:45.579828] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:09:59.936 [2024-10-17 20:06:45.579847] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:59.936 [2024-10-17 20:06:45.580566] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:59.936 [2024-10-17 20:06:45.580602] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:59.936 [2024-10-17 20:06:45.580704] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:59.936 [2024-10-17 20:06:45.580740] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:59.936 [2024-10-17 20:06:45.580873] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:59.936 [2024-10-17 20:06:45.580892] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:59.936 [2024-10-17 
20:06:45.581252] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:59.936 [2024-10-17 20:06:45.581492] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:59.936 [2024-10-17 20:06:45.581508] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:59.936 [2024-10-17 20:06:45.581671] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:59.936 pt2 00:09:59.936 20:06:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.936 20:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:59.936 20:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:59.936 20:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:59.936 20:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:59.936 20:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:59.936 20:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:59.936 20:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:59.936 20:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:59.936 20:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.936 20:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.936 20:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.936 20:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.195 20:06:45 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.195 20:06:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.195 20:06:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.195 20:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:00.195 20:06:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.195 20:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.195 "name": "raid_bdev1", 00:10:00.195 "uuid": "4999e26f-6cde-4f8a-97a0-104b1dbf0a0e", 00:10:00.195 "strip_size_kb": 0, 00:10:00.195 "state": "online", 00:10:00.195 "raid_level": "raid1", 00:10:00.195 "superblock": true, 00:10:00.195 "num_base_bdevs": 2, 00:10:00.195 "num_base_bdevs_discovered": 2, 00:10:00.195 "num_base_bdevs_operational": 2, 00:10:00.195 "base_bdevs_list": [ 00:10:00.195 { 00:10:00.195 "name": "pt1", 00:10:00.195 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:00.195 "is_configured": true, 00:10:00.195 "data_offset": 2048, 00:10:00.195 "data_size": 63488 00:10:00.195 }, 00:10:00.195 { 00:10:00.195 "name": "pt2", 00:10:00.195 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:00.195 "is_configured": true, 00:10:00.195 "data_offset": 2048, 00:10:00.195 "data_size": 63488 00:10:00.195 } 00:10:00.195 ] 00:10:00.195 }' 00:10:00.195 20:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.195 20:06:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.454 20:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:00.713 20:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:00.713 20:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:10:00.713 20:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:00.713 20:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:00.713 20:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:00.713 20:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:00.713 20:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:00.713 20:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.713 20:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.713 [2024-10-17 20:06:46.115974] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:00.713 20:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.713 20:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:00.713 "name": "raid_bdev1", 00:10:00.713 "aliases": [ 00:10:00.713 "4999e26f-6cde-4f8a-97a0-104b1dbf0a0e" 00:10:00.713 ], 00:10:00.713 "product_name": "Raid Volume", 00:10:00.713 "block_size": 512, 00:10:00.713 "num_blocks": 63488, 00:10:00.713 "uuid": "4999e26f-6cde-4f8a-97a0-104b1dbf0a0e", 00:10:00.713 "assigned_rate_limits": { 00:10:00.713 "rw_ios_per_sec": 0, 00:10:00.713 "rw_mbytes_per_sec": 0, 00:10:00.713 "r_mbytes_per_sec": 0, 00:10:00.713 "w_mbytes_per_sec": 0 00:10:00.713 }, 00:10:00.713 "claimed": false, 00:10:00.713 "zoned": false, 00:10:00.713 "supported_io_types": { 00:10:00.713 "read": true, 00:10:00.713 "write": true, 00:10:00.713 "unmap": false, 00:10:00.713 "flush": false, 00:10:00.713 "reset": true, 00:10:00.713 "nvme_admin": false, 00:10:00.713 "nvme_io": false, 00:10:00.713 "nvme_io_md": false, 00:10:00.713 "write_zeroes": true, 00:10:00.713 "zcopy": false, 00:10:00.713 "get_zone_info": false, 
00:10:00.713 "zone_management": false, 00:10:00.713 "zone_append": false, 00:10:00.713 "compare": false, 00:10:00.713 "compare_and_write": false, 00:10:00.713 "abort": false, 00:10:00.713 "seek_hole": false, 00:10:00.713 "seek_data": false, 00:10:00.713 "copy": false, 00:10:00.713 "nvme_iov_md": false 00:10:00.713 }, 00:10:00.713 "memory_domains": [ 00:10:00.713 { 00:10:00.713 "dma_device_id": "system", 00:10:00.713 "dma_device_type": 1 00:10:00.713 }, 00:10:00.713 { 00:10:00.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.713 "dma_device_type": 2 00:10:00.713 }, 00:10:00.713 { 00:10:00.713 "dma_device_id": "system", 00:10:00.713 "dma_device_type": 1 00:10:00.713 }, 00:10:00.713 { 00:10:00.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.713 "dma_device_type": 2 00:10:00.713 } 00:10:00.713 ], 00:10:00.713 "driver_specific": { 00:10:00.713 "raid": { 00:10:00.713 "uuid": "4999e26f-6cde-4f8a-97a0-104b1dbf0a0e", 00:10:00.713 "strip_size_kb": 0, 00:10:00.713 "state": "online", 00:10:00.713 "raid_level": "raid1", 00:10:00.713 "superblock": true, 00:10:00.713 "num_base_bdevs": 2, 00:10:00.713 "num_base_bdevs_discovered": 2, 00:10:00.713 "num_base_bdevs_operational": 2, 00:10:00.713 "base_bdevs_list": [ 00:10:00.713 { 00:10:00.713 "name": "pt1", 00:10:00.713 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:00.713 "is_configured": true, 00:10:00.713 "data_offset": 2048, 00:10:00.713 "data_size": 63488 00:10:00.713 }, 00:10:00.713 { 00:10:00.713 "name": "pt2", 00:10:00.713 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:00.713 "is_configured": true, 00:10:00.713 "data_offset": 2048, 00:10:00.713 "data_size": 63488 00:10:00.713 } 00:10:00.713 ] 00:10:00.713 } 00:10:00.713 } 00:10:00.713 }' 00:10:00.713 20:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:00.713 20:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # 
base_bdev_names='pt1 00:10:00.713 pt2' 00:10:00.713 20:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:00.713 20:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:00.713 20:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:00.713 20:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:00.713 20:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.713 20:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.713 20:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:00.713 20:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.713 20:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:00.713 20:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:00.713 20:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:00.713 20:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:00.713 20:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:00.713 20:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.713 20:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.972 20:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.972 20:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:10:00.972 20:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:00.972 20:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:00.972 20:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.972 20:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.972 20:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:00.972 [2024-10-17 20:06:46.408139] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:00.972 20:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.972 20:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 4999e26f-6cde-4f8a-97a0-104b1dbf0a0e '!=' 4999e26f-6cde-4f8a-97a0-104b1dbf0a0e ']' 00:10:00.972 20:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:10:00.972 20:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:00.972 20:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:00.972 20:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:10:00.972 20:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.972 20:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.972 [2024-10-17 20:06:46.459780] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:10:00.972 20:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.972 20:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:00.972 20:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:10:00.972 20:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:00.972 20:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:00.972 20:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:00.972 20:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:00.972 20:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.972 20:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.972 20:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.973 20:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.973 20:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.973 20:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:00.973 20:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.973 20:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.973 20:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.973 20:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.973 "name": "raid_bdev1", 00:10:00.973 "uuid": "4999e26f-6cde-4f8a-97a0-104b1dbf0a0e", 00:10:00.973 "strip_size_kb": 0, 00:10:00.973 "state": "online", 00:10:00.973 "raid_level": "raid1", 00:10:00.973 "superblock": true, 00:10:00.973 "num_base_bdevs": 2, 00:10:00.973 "num_base_bdevs_discovered": 1, 00:10:00.973 "num_base_bdevs_operational": 1, 00:10:00.973 "base_bdevs_list": [ 00:10:00.973 { 00:10:00.973 "name": null, 00:10:00.973 "uuid": "00000000-0000-0000-0000-000000000000", 
00:10:00.973 "is_configured": false, 00:10:00.973 "data_offset": 0, 00:10:00.973 "data_size": 63488 00:10:00.973 }, 00:10:00.973 { 00:10:00.973 "name": "pt2", 00:10:00.973 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:00.973 "is_configured": true, 00:10:00.973 "data_offset": 2048, 00:10:00.973 "data_size": 63488 00:10:00.973 } 00:10:00.973 ] 00:10:00.973 }' 00:10:00.973 20:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.973 20:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.597 20:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:01.597 20:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.597 20:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.597 [2024-10-17 20:06:46.979957] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:01.597 [2024-10-17 20:06:46.980197] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:01.597 [2024-10-17 20:06:46.980320] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:01.598 [2024-10-17 20:06:46.980386] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:01.598 [2024-10-17 20:06:46.980406] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:01.598 20:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.598 20:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.598 20:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:10:01.598 20:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.598 
20:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.598 20:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.598 20:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:10:01.598 20:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:10:01.598 20:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:10:01.598 20:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:01.598 20:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:10:01.598 20:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.598 20:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.598 20:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.598 20:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:01.598 20:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:01.598 20:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:10:01.598 20:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:01.598 20:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:10:01.598 20:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:01.598 20:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.598 20:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.598 [2024-10-17 20:06:47.051937] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 
00:10:01.598 [2024-10-17 20:06:47.052058] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:01.598 [2024-10-17 20:06:47.052084] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:01.598 [2024-10-17 20:06:47.052101] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:01.598 [2024-10-17 20:06:47.055642] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:01.598 [2024-10-17 20:06:47.055702] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:01.598 [2024-10-17 20:06:47.055791] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:01.598 [2024-10-17 20:06:47.055862] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:01.598 [2024-10-17 20:06:47.056033] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:01.598 [2024-10-17 20:06:47.056062] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:01.598 [2024-10-17 20:06:47.056378] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:01.598 [2024-10-17 20:06:47.056586] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:01.598 [2024-10-17 20:06:47.056602] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:01.598 [2024-10-17 20:06:47.056845] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:01.598 pt2 00:10:01.598 20:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.598 20:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:01.598 20:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:10:01.598 20:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:01.598 20:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:01.598 20:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:01.598 20:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:01.598 20:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.598 20:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.598 20:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.598 20:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.598 20:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.598 20:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.598 20:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:01.598 20:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.598 20:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.598 20:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.598 "name": "raid_bdev1", 00:10:01.598 "uuid": "4999e26f-6cde-4f8a-97a0-104b1dbf0a0e", 00:10:01.598 "strip_size_kb": 0, 00:10:01.598 "state": "online", 00:10:01.598 "raid_level": "raid1", 00:10:01.598 "superblock": true, 00:10:01.598 "num_base_bdevs": 2, 00:10:01.598 "num_base_bdevs_discovered": 1, 00:10:01.598 "num_base_bdevs_operational": 1, 00:10:01.598 "base_bdevs_list": [ 00:10:01.598 { 00:10:01.598 "name": null, 00:10:01.598 "uuid": "00000000-0000-0000-0000-000000000000", 
00:10:01.598 "is_configured": false, 00:10:01.598 "data_offset": 2048, 00:10:01.598 "data_size": 63488 00:10:01.598 }, 00:10:01.598 { 00:10:01.598 "name": "pt2", 00:10:01.598 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:01.598 "is_configured": true, 00:10:01.598 "data_offset": 2048, 00:10:01.598 "data_size": 63488 00:10:01.598 } 00:10:01.598 ] 00:10:01.598 }' 00:10:01.598 20:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.598 20:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.165 20:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:02.165 20:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.165 20:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.165 [2024-10-17 20:06:47.580290] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:02.165 [2024-10-17 20:06:47.580327] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:02.165 [2024-10-17 20:06:47.580411] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:02.165 [2024-10-17 20:06:47.580538] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:02.165 [2024-10-17 20:06:47.580567] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:02.165 20:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.165 20:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.165 20:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:10:02.165 20:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.165 
20:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.165 20:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.165 20:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:10:02.165 20:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:10:02.165 20:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:10:02.165 20:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:02.165 20:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.165 20:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.165 [2024-10-17 20:06:47.644363] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:02.165 [2024-10-17 20:06:47.644581] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:02.165 [2024-10-17 20:06:47.644653] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:10:02.165 [2024-10-17 20:06:47.644772] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:02.165 [2024-10-17 20:06:47.647807] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:02.165 [2024-10-17 20:06:47.648030] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:02.165 [2024-10-17 20:06:47.648269] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:02.165 [2024-10-17 20:06:47.648430] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:02.165 [2024-10-17 20:06:47.648721] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 
00:10:02.165 [2024-10-17 20:06:47.648861] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:02.165 [2024-10-17 20:06:47.648981] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:10:02.165 [2024-10-17 20:06:47.649216] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:02.165 [2024-10-17 20:06:47.649490] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:10:02.165 [2024-10-17 20:06:47.649514] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:02.165 [2024-10-17 20:06:47.649835] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:02.165 pt1 00:10:02.165 [2024-10-17 20:06:47.650060] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:10:02.165 [2024-10-17 20:06:47.650081] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:10:02.165 [2024-10-17 20:06:47.650273] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:02.165 20:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.165 20:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:10:02.165 20:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:02.165 20:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:02.165 20:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:02.165 20:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:02.165 20:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:02.165 20:06:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:02.165 20:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.165 20:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.165 20:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.165 20:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.165 20:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.165 20:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.165 20:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.165 20:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:02.165 20:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.165 20:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.165 "name": "raid_bdev1", 00:10:02.165 "uuid": "4999e26f-6cde-4f8a-97a0-104b1dbf0a0e", 00:10:02.165 "strip_size_kb": 0, 00:10:02.165 "state": "online", 00:10:02.165 "raid_level": "raid1", 00:10:02.165 "superblock": true, 00:10:02.165 "num_base_bdevs": 2, 00:10:02.165 "num_base_bdevs_discovered": 1, 00:10:02.165 "num_base_bdevs_operational": 1, 00:10:02.165 "base_bdevs_list": [ 00:10:02.165 { 00:10:02.165 "name": null, 00:10:02.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.165 "is_configured": false, 00:10:02.165 "data_offset": 2048, 00:10:02.165 "data_size": 63488 00:10:02.165 }, 00:10:02.165 { 00:10:02.165 "name": "pt2", 00:10:02.165 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:02.165 "is_configured": true, 00:10:02.165 "data_offset": 2048, 00:10:02.165 "data_size": 63488 00:10:02.165 } 
00:10:02.165 ] 00:10:02.165 }' 00:10:02.165 20:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.165 20:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.731 20:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:02.732 20:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:10:02.732 20:06:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.732 20:06:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.732 20:06:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.732 20:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:10:02.732 20:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:02.732 20:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:10:02.732 20:06:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.732 20:06:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.732 [2024-10-17 20:06:48.225259] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:02.732 20:06:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.732 20:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 4999e26f-6cde-4f8a-97a0-104b1dbf0a0e '!=' 4999e26f-6cde-4f8a-97a0-104b1dbf0a0e ']' 00:10:02.732 20:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63081 00:10:02.732 20:06:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 63081 ']' 00:10:02.732 20:06:48 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@954 -- # kill -0 63081 00:10:02.732 20:06:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:10:02.732 20:06:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:02.732 20:06:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63081 00:10:02.732 killing process with pid 63081 00:10:02.732 20:06:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:02.732 20:06:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:02.732 20:06:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63081' 00:10:02.732 20:06:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 63081 00:10:02.732 [2024-10-17 20:06:48.300636] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:02.732 20:06:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 63081 00:10:02.732 [2024-10-17 20:06:48.300751] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:02.732 [2024-10-17 20:06:48.300828] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:02.732 [2024-10-17 20:06:48.300850] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:10:02.990 [2024-10-17 20:06:48.496970] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:03.925 ************************************ 00:10:03.925 END TEST raid_superblock_test 00:10:03.925 ************************************ 00:10:03.925 20:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:03.925 00:10:03.925 real 0m6.697s 00:10:03.925 user 0m10.638s 00:10:03.925 sys 0m0.959s 00:10:03.925 20:06:49 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:03.925 20:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.925 20:06:49 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:10:03.925 20:06:49 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:03.925 20:06:49 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:03.925 20:06:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:03.925 ************************************ 00:10:03.925 START TEST raid_read_error_test 00:10:03.925 ************************************ 00:10:03.925 20:06:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 read 00:10:03.925 20:06:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:03.925 20:06:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:10:03.925 20:06:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:03.925 20:06:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:03.925 20:06:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:03.925 20:06:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:03.925 20:06:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:03.925 20:06:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:03.925 20:06:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:03.925 20:06:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:03.925 20:06:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:03.925 20:06:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:03.925 20:06:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:03.925 20:06:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:03.925 20:06:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:03.925 20:06:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:03.925 20:06:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:03.925 20:06:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:03.925 20:06:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:03.925 20:06:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:03.925 20:06:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:03.925 20:06:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.fQUylgUl2M 00:10:03.925 20:06:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63417 00:10:03.925 20:06:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:03.925 20:06:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63417 00:10:03.926 20:06:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 63417 ']' 00:10:03.926 20:06:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:04.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:04.184 20:06:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:04.184 20:06:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:04.184 20:06:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:04.184 20:06:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.184 [2024-10-17 20:06:49.684336] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 00:10:04.184 [2024-10-17 20:06:49.684536] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63417 ] 00:10:04.442 [2024-10-17 20:06:49.860738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.442 [2024-10-17 20:06:49.988051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.700 [2024-10-17 20:06:50.188621] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:04.700 [2024-10-17 20:06:50.188941] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:05.268 20:06:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:05.268 20:06:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:05.268 20:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:05.268 20:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:05.268 20:06:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.268 20:06:50 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:05.268 BaseBdev1_malloc 00:10:05.268 20:06:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.268 20:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:05.268 20:06:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.268 20:06:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.268 true 00:10:05.268 20:06:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.268 20:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:05.268 20:06:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.268 20:06:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.268 [2024-10-17 20:06:50.717837] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:05.268 [2024-10-17 20:06:50.717916] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:05.268 [2024-10-17 20:06:50.717945] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:05.268 [2024-10-17 20:06:50.717979] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:05.268 [2024-10-17 20:06:50.720894] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:05.268 [2024-10-17 20:06:50.720959] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:05.268 BaseBdev1 00:10:05.268 20:06:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.268 20:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:05.268 20:06:50 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:05.268 20:06:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.268 20:06:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.268 BaseBdev2_malloc 00:10:05.268 20:06:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.268 20:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:05.268 20:06:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.268 20:06:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.268 true 00:10:05.268 20:06:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.268 20:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:05.269 20:06:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.269 20:06:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.269 [2024-10-17 20:06:50.771028] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:05.269 [2024-10-17 20:06:50.771104] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:05.269 [2024-10-17 20:06:50.771130] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:05.269 [2024-10-17 20:06:50.771146] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:05.269 [2024-10-17 20:06:50.773898] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:05.269 [2024-10-17 20:06:50.773945] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 
00:10:05.269 BaseBdev2 00:10:05.269 20:06:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.269 20:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:10:05.269 20:06:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.269 20:06:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.269 [2024-10-17 20:06:50.779128] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:05.269 [2024-10-17 20:06:50.781642] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:05.269 [2024-10-17 20:06:50.781900] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:05.269 [2024-10-17 20:06:50.781922] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:05.269 [2024-10-17 20:06:50.782224] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:05.269 [2024-10-17 20:06:50.782455] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:05.269 [2024-10-17 20:06:50.782472] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:05.269 [2024-10-17 20:06:50.782645] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:05.269 20:06:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.269 20:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:05.269 20:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:05.269 20:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:05.269 
20:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:05.269 20:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:05.269 20:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:05.269 20:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.269 20:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.269 20:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.269 20:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.269 20:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.269 20:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:05.269 20:06:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.269 20:06:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.269 20:06:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.269 20:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.269 "name": "raid_bdev1", 00:10:05.269 "uuid": "f67f8e9c-dbfd-4cb2-9d25-05777263a74d", 00:10:05.269 "strip_size_kb": 0, 00:10:05.269 "state": "online", 00:10:05.269 "raid_level": "raid1", 00:10:05.269 "superblock": true, 00:10:05.269 "num_base_bdevs": 2, 00:10:05.269 "num_base_bdevs_discovered": 2, 00:10:05.269 "num_base_bdevs_operational": 2, 00:10:05.269 "base_bdevs_list": [ 00:10:05.269 { 00:10:05.269 "name": "BaseBdev1", 00:10:05.269 "uuid": "2ffe2a94-7309-5c9c-b7ca-284fd1e6bf63", 00:10:05.269 "is_configured": true, 00:10:05.269 "data_offset": 2048, 00:10:05.269 "data_size": 63488 00:10:05.269 }, 
00:10:05.269 { 00:10:05.269 "name": "BaseBdev2", 00:10:05.269 "uuid": "160772a0-a97b-538f-a8f7-f94909dff93d", 00:10:05.269 "is_configured": true, 00:10:05.269 "data_offset": 2048, 00:10:05.269 "data_size": 63488 00:10:05.269 } 00:10:05.269 ] 00:10:05.269 }' 00:10:05.269 20:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.269 20:06:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.836 20:06:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:05.836 20:06:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:05.836 [2024-10-17 20:06:51.449128] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:06.771 20:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:06.771 20:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.771 20:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.771 20:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.771 20:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:06.771 20:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:06.771 20:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:10:06.771 20:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:10:06.771 20:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:06.771 20:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:06.771 20:06:52 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:06.771 20:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:06.771 20:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:06.771 20:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:06.771 20:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.771 20:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.771 20:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.771 20:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.771 20:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.771 20:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:06.771 20:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.771 20:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.771 20:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.771 20:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.771 "name": "raid_bdev1", 00:10:06.771 "uuid": "f67f8e9c-dbfd-4cb2-9d25-05777263a74d", 00:10:06.771 "strip_size_kb": 0, 00:10:06.771 "state": "online", 00:10:06.771 "raid_level": "raid1", 00:10:06.771 "superblock": true, 00:10:06.771 "num_base_bdevs": 2, 00:10:06.771 "num_base_bdevs_discovered": 2, 00:10:06.771 "num_base_bdevs_operational": 2, 00:10:06.772 "base_bdevs_list": [ 00:10:06.772 { 00:10:06.772 "name": "BaseBdev1", 00:10:06.772 "uuid": "2ffe2a94-7309-5c9c-b7ca-284fd1e6bf63", 00:10:06.772 
"is_configured": true, 00:10:06.772 "data_offset": 2048, 00:10:06.772 "data_size": 63488 00:10:06.772 }, 00:10:06.772 { 00:10:06.772 "name": "BaseBdev2", 00:10:06.772 "uuid": "160772a0-a97b-538f-a8f7-f94909dff93d", 00:10:06.772 "is_configured": true, 00:10:06.772 "data_offset": 2048, 00:10:06.772 "data_size": 63488 00:10:06.772 } 00:10:06.772 ] 00:10:06.772 }' 00:10:06.772 20:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.772 20:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.339 20:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:07.339 20:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.339 20:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.339 [2024-10-17 20:06:52.892722] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:07.339 [2024-10-17 20:06:52.892939] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:07.339 [2024-10-17 20:06:52.896322] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:07.339 [2024-10-17 20:06:52.896383] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:07.339 [2024-10-17 20:06:52.896515] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:07.339 [2024-10-17 20:06:52.896536] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:07.339 { 00:10:07.339 "results": [ 00:10:07.339 { 00:10:07.339 "job": "raid_bdev1", 00:10:07.339 "core_mask": "0x1", 00:10:07.339 "workload": "randrw", 00:10:07.339 "percentage": 50, 00:10:07.339 "status": "finished", 00:10:07.339 "queue_depth": 1, 00:10:07.339 "io_size": 131072, 00:10:07.339 "runtime": 1.440602, 00:10:07.339 
"iops": 12162.970758058089, 00:10:07.339 "mibps": 1520.371344757261, 00:10:07.339 "io_failed": 0, 00:10:07.339 "io_timeout": 0, 00:10:07.339 "avg_latency_us": 78.07718421516846, 00:10:07.339 "min_latency_us": 39.56363636363636, 00:10:07.339 "max_latency_us": 1817.1345454545456 00:10:07.339 } 00:10:07.339 ], 00:10:07.339 "core_count": 1 00:10:07.339 } 00:10:07.339 20:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.339 20:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63417 00:10:07.339 20:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 63417 ']' 00:10:07.339 20:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 63417 00:10:07.339 20:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:10:07.339 20:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:07.339 20:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63417 00:10:07.339 killing process with pid 63417 00:10:07.339 20:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:07.339 20:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:07.339 20:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63417' 00:10:07.339 20:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 63417 00:10:07.339 [2024-10-17 20:06:52.932516] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:07.339 20:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 63417 00:10:07.598 [2024-10-17 20:06:53.041592] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:08.534 20:06:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 
-- # grep raid_bdev1 00:10:08.534 20:06:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.fQUylgUl2M 00:10:08.534 20:06:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:08.534 20:06:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:08.534 20:06:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:08.534 20:06:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:08.534 20:06:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:08.534 20:06:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:08.534 00:10:08.534 real 0m4.491s 00:10:08.534 user 0m5.719s 00:10:08.534 sys 0m0.542s 00:10:08.534 ************************************ 00:10:08.534 END TEST raid_read_error_test 00:10:08.534 ************************************ 00:10:08.534 20:06:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:08.534 20:06:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.534 20:06:54 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:10:08.534 20:06:54 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:08.534 20:06:54 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:08.534 20:06:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:08.534 ************************************ 00:10:08.534 START TEST raid_write_error_test 00:10:08.534 ************************************ 00:10:08.534 20:06:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 write 00:10:08.534 20:06:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:08.534 20:06:54 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:10:08.534 20:06:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:08.534 20:06:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:08.534 20:06:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:08.534 20:06:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:08.534 20:06:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:08.534 20:06:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:08.534 20:06:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:08.534 20:06:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:08.534 20:06:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:08.534 20:06:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:08.534 20:06:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:08.534 20:06:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:08.534 20:06:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:08.534 20:06:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:08.534 20:06:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:08.534 20:06:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:08.534 20:06:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:08.534 20:06:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:08.534 20:06:54 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:08.534 20:06:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.OqzeXM2w0J 00:10:08.534 20:06:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63557 00:10:08.534 20:06:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:08.534 20:06:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63557 00:10:08.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:08.534 20:06:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 63557 ']' 00:10:08.534 20:06:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:08.534 20:06:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:08.534 20:06:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:08.534 20:06:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:08.534 20:06:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.793 [2024-10-17 20:06:54.242943] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 
00:10:08.793 [2024-10-17 20:06:54.243186] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63557 ] 00:10:08.793 [2024-10-17 20:06:54.421561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.052 [2024-10-17 20:06:54.563637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.310 [2024-10-17 20:06:54.778890] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:09.310 [2024-10-17 20:06:54.779240] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:09.901 20:06:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:09.901 20:06:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:09.901 20:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:09.901 20:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:09.901 20:06:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.901 20:06:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.901 BaseBdev1_malloc 00:10:09.901 20:06:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.901 20:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:09.901 20:06:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.901 20:06:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.901 true 00:10:09.901 20:06:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:09.901 20:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:09.901 20:06:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.901 20:06:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.901 [2024-10-17 20:06:55.286742] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:09.901 [2024-10-17 20:06:55.286823] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:09.901 [2024-10-17 20:06:55.286852] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:09.901 [2024-10-17 20:06:55.286869] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:09.901 [2024-10-17 20:06:55.289786] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:09.901 [2024-10-17 20:06:55.289838] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:09.901 BaseBdev1 00:10:09.901 20:06:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.901 20:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:09.901 20:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:09.901 20:06:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.901 20:06:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.901 BaseBdev2_malloc 00:10:09.901 20:06:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.901 20:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:09.901 20:06:55 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.901 20:06:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.901 true 00:10:09.901 20:06:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.901 20:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:09.901 20:06:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.901 20:06:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.901 [2024-10-17 20:06:55.342300] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:09.901 [2024-10-17 20:06:55.342398] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:09.901 [2024-10-17 20:06:55.342425] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:09.901 [2024-10-17 20:06:55.342442] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:09.901 [2024-10-17 20:06:55.345432] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:09.901 [2024-10-17 20:06:55.345500] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:09.901 BaseBdev2 00:10:09.901 20:06:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.901 20:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:10:09.901 20:06:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.901 20:06:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.901 [2024-10-17 20:06:55.354493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:10:09.901 [2024-10-17 20:06:55.357263] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:09.901 [2024-10-17 20:06:55.357683] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:09.901 [2024-10-17 20:06:55.357864] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:09.901 [2024-10-17 20:06:55.358289] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:09.901 [2024-10-17 20:06:55.358682] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:09.901 [2024-10-17 20:06:55.358808] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:09.901 [2024-10-17 20:06:55.359112] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:09.901 20:06:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.901 20:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:09.901 20:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:09.901 20:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:09.901 20:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:09.901 20:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:09.901 20:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:09.901 20:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.901 20:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.901 20:06:55 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.901 20:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.901 20:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:09.901 20:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.901 20:06:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.901 20:06:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.901 20:06:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.901 20:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.901 "name": "raid_bdev1", 00:10:09.901 "uuid": "28149b12-4cb2-454e-9a8f-8858a33009a5", 00:10:09.901 "strip_size_kb": 0, 00:10:09.901 "state": "online", 00:10:09.901 "raid_level": "raid1", 00:10:09.901 "superblock": true, 00:10:09.901 "num_base_bdevs": 2, 00:10:09.901 "num_base_bdevs_discovered": 2, 00:10:09.901 "num_base_bdevs_operational": 2, 00:10:09.901 "base_bdevs_list": [ 00:10:09.901 { 00:10:09.901 "name": "BaseBdev1", 00:10:09.901 "uuid": "26a352e0-82a2-5374-b4c7-efda8c3c79ad", 00:10:09.901 "is_configured": true, 00:10:09.901 "data_offset": 2048, 00:10:09.901 "data_size": 63488 00:10:09.901 }, 00:10:09.901 { 00:10:09.901 "name": "BaseBdev2", 00:10:09.901 "uuid": "9d372779-09a2-5ff9-a468-8233e4612085", 00:10:09.901 "is_configured": true, 00:10:09.901 "data_offset": 2048, 00:10:09.901 "data_size": 63488 00:10:09.901 } 00:10:09.901 ] 00:10:09.901 }' 00:10:09.901 20:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.901 20:06:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.469 20:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:10.469 20:06:55 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:10.469 [2024-10-17 20:06:56.032652] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:11.405 20:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:11.405 20:06:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.405 20:06:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.405 [2024-10-17 20:06:56.910074] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:10:11.405 [2024-10-17 20:06:56.910179] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:11.405 [2024-10-17 20:06:56.910467] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:10:11.405 20:06:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.405 20:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:11.405 20:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:11.405 20:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:10:11.405 20:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:10:11.405 20:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:11.405 20:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:11.405 20:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:11.405 20:06:56 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:11.405 20:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:11.405 20:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:11.405 20:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.405 20:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.405 20:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.405 20:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.405 20:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.405 20:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:11.405 20:06:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.405 20:06:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.405 20:06:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.405 20:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.405 "name": "raid_bdev1", 00:10:11.405 "uuid": "28149b12-4cb2-454e-9a8f-8858a33009a5", 00:10:11.405 "strip_size_kb": 0, 00:10:11.405 "state": "online", 00:10:11.405 "raid_level": "raid1", 00:10:11.405 "superblock": true, 00:10:11.405 "num_base_bdevs": 2, 00:10:11.405 "num_base_bdevs_discovered": 1, 00:10:11.406 "num_base_bdevs_operational": 1, 00:10:11.406 "base_bdevs_list": [ 00:10:11.406 { 00:10:11.406 "name": null, 00:10:11.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.406 "is_configured": false, 00:10:11.406 "data_offset": 0, 00:10:11.406 "data_size": 63488 00:10:11.406 }, 00:10:11.406 { 00:10:11.406 "name": 
"BaseBdev2", 00:10:11.406 "uuid": "9d372779-09a2-5ff9-a468-8233e4612085", 00:10:11.406 "is_configured": true, 00:10:11.406 "data_offset": 2048, 00:10:11.406 "data_size": 63488 00:10:11.406 } 00:10:11.406 ] 00:10:11.406 }' 00:10:11.406 20:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.406 20:06:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.974 20:06:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:11.974 20:06:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.974 20:06:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.974 [2024-10-17 20:06:57.453018] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:11.974 [2024-10-17 20:06:57.453189] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:11.974 [2024-10-17 20:06:57.456778] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:11.974 [2024-10-17 20:06:57.456963] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:11.974 [2024-10-17 20:06:57.457282] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
{ 00:10:11.974 "results": [ 00:10:11.974 { 00:10:11.974 "job": "raid_bdev1", 00:10:11.974 "core_mask": "0x1", 00:10:11.974 "workload": "randrw", 00:10:11.974 "percentage": 50, 00:10:11.974 "status": "finished", 00:10:11.974 "queue_depth": 1, 00:10:11.974 "io_size": 131072, 00:10:11.974 "runtime": 1.418037, 00:10:11.974 "iops": 15065.897434270051, 00:10:11.974 "mibps": 1883.2371792837564, 00:10:11.974 "io_failed": 0, 00:10:11.974 "io_timeout": 0, 00:10:11.974 "avg_latency_us": 62.413431601164234, 00:10:11.974 "min_latency_us": 37.93454545454546, 00:10:11.974 "max_latency_us": 1787.3454545454545 00:10:11.974 } 00:10:11.974 ], 00:10:11.974 "core_count": 1 00:10:11.974 } 00:10:11.974
[2024-10-17 20:06:57.457431] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:11.974 20:06:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.974 20:06:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63557 00:10:11.974 20:06:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 63557 ']' 00:10:11.974 20:06:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 63557 00:10:11.974 20:06:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:10:11.974 20:06:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:11.974 20:06:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63557 00:10:11.974 killing process with pid 63557 00:10:11.974 20:06:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:11.974 20:06:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:11.974 20:06:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63557' 00:10:11.974 20:06:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 63557 00:10:11.974 [2024-10-17 20:06:57.497064] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:11.974 20:06:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 63557 00:10:12.233 [2024-10-17 20:06:57.625502] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:13.173 20:06:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.OqzeXM2w0J 00:10:13.173 20:06:58 bdev_raid.raid_write_error_test --
bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:13.173 20:06:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:13.173 20:06:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:13.174 20:06:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:13.174 20:06:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:13.174 20:06:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:13.174 20:06:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:13.174 00:10:13.174 real 0m4.616s 00:10:13.174 user 0m5.844s 00:10:13.174 sys 0m0.553s 00:10:13.174 ************************************ 00:10:13.174 END TEST raid_write_error_test 00:10:13.174 ************************************ 00:10:13.174 20:06:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:13.174 20:06:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.174 20:06:58 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:10:13.174 20:06:58 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:13.174 20:06:58 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:10:13.174 20:06:58 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:13.174 20:06:58 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:13.174 20:06:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:13.174 ************************************ 00:10:13.174 START TEST raid_state_function_test 00:10:13.174 ************************************ 00:10:13.174 20:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 false 00:10:13.174 20:06:58 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:13.174 20:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:13.174 20:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:13.174 20:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:13.174 20:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:13.174 20:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:13.174 20:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:13.174 20:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:13.174 20:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:13.174 20:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:13.174 20:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:13.174 20:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:13.174 20:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:13.174 20:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:13.174 20:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:13.174 20:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:13.174 20:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:13.174 20:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:13.174 20:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:13.174 
20:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:13.174 20:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:13.174 20:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:13.174 20:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:13.174 Process raid pid: 63706 00:10:13.174 20:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:13.174 20:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:13.174 20:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:13.174 20:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63706 00:10:13.174 20:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63706' 00:10:13.174 20:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:13.174 20:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63706 00:10:13.174 20:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 63706 ']' 00:10:13.174 20:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:13.174 20:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:13.174 20:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:13.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:13.174 20:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:13.174 20:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.433 [2024-10-17 20:06:58.885821] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 00:10:13.433 [2024-10-17 20:06:58.886279] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:13.433 [2024-10-17 20:06:59.049018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.691 [2024-10-17 20:06:59.178084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.951 [2024-10-17 20:06:59.384872] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:13.951 [2024-10-17 20:06:59.385191] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:14.519 20:06:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:14.519 20:06:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:10:14.519 20:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:14.519 20:06:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.519 20:06:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.519 [2024-10-17 20:06:59.988144] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:14.519 [2024-10-17 20:06:59.988239] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:14.519 [2024-10-17 20:06:59.988257] 
bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:14.519 [2024-10-17 20:06:59.988273] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:14.519 [2024-10-17 20:06:59.988284] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:14.519 [2024-10-17 20:06:59.988298] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:14.519 20:06:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.519 20:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:14.519 20:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.519 20:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.519 20:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:14.519 20:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:14.519 20:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:14.519 20:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.519 20:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.519 20:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.519 20:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.519 20:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.519 20:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:10:14.519 20:06:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.519 20:06:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.519 20:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.519 20:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.519 "name": "Existed_Raid", 00:10:14.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.519 "strip_size_kb": 64, 00:10:14.519 "state": "configuring", 00:10:14.519 "raid_level": "raid0", 00:10:14.519 "superblock": false, 00:10:14.519 "num_base_bdevs": 3, 00:10:14.519 "num_base_bdevs_discovered": 0, 00:10:14.519 "num_base_bdevs_operational": 3, 00:10:14.519 "base_bdevs_list": [ 00:10:14.519 { 00:10:14.519 "name": "BaseBdev1", 00:10:14.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.519 "is_configured": false, 00:10:14.519 "data_offset": 0, 00:10:14.519 "data_size": 0 00:10:14.519 }, 00:10:14.519 { 00:10:14.519 "name": "BaseBdev2", 00:10:14.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.519 "is_configured": false, 00:10:14.519 "data_offset": 0, 00:10:14.519 "data_size": 0 00:10:14.519 }, 00:10:14.519 { 00:10:14.519 "name": "BaseBdev3", 00:10:14.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.519 "is_configured": false, 00:10:14.519 "data_offset": 0, 00:10:14.519 "data_size": 0 00:10:14.519 } 00:10:14.519 ] 00:10:14.519 }' 00:10:14.519 20:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.519 20:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.086 20:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:15.086 20:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.086 20:07:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.086 [2024-10-17 20:07:00.524257] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:15.086 [2024-10-17 20:07:00.524309] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:15.086 20:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.086 20:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:15.086 20:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.086 20:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.086 [2024-10-17 20:07:00.532298] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:15.086 [2024-10-17 20:07:00.532362] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:15.086 [2024-10-17 20:07:00.532378] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:15.086 [2024-10-17 20:07:00.532395] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:15.086 [2024-10-17 20:07:00.532405] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:15.086 [2024-10-17 20:07:00.532419] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:15.086 20:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.086 20:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:15.086 20:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:15.086 20:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.086 [2024-10-17 20:07:00.578768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:15.086 BaseBdev1 00:10:15.086 20:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.086 20:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:15.086 20:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:15.086 20:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:15.086 20:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:15.086 20:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:15.086 20:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:15.086 20:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:15.086 20:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.086 20:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.086 20:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.086 20:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:15.086 20:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.086 20:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.086 [ 00:10:15.086 { 00:10:15.086 "name": "BaseBdev1", 00:10:15.086 "aliases": [ 00:10:15.086 "b0b3c545-7020-4833-9944-7ae9d5b947f8" 00:10:15.086 ], 00:10:15.086 
"product_name": "Malloc disk", 00:10:15.086 "block_size": 512, 00:10:15.086 "num_blocks": 65536, 00:10:15.086 "uuid": "b0b3c545-7020-4833-9944-7ae9d5b947f8", 00:10:15.086 "assigned_rate_limits": { 00:10:15.086 "rw_ios_per_sec": 0, 00:10:15.086 "rw_mbytes_per_sec": 0, 00:10:15.086 "r_mbytes_per_sec": 0, 00:10:15.086 "w_mbytes_per_sec": 0 00:10:15.086 }, 00:10:15.086 "claimed": true, 00:10:15.086 "claim_type": "exclusive_write", 00:10:15.086 "zoned": false, 00:10:15.086 "supported_io_types": { 00:10:15.086 "read": true, 00:10:15.086 "write": true, 00:10:15.086 "unmap": true, 00:10:15.086 "flush": true, 00:10:15.086 "reset": true, 00:10:15.086 "nvme_admin": false, 00:10:15.086 "nvme_io": false, 00:10:15.086 "nvme_io_md": false, 00:10:15.086 "write_zeroes": true, 00:10:15.086 "zcopy": true, 00:10:15.086 "get_zone_info": false, 00:10:15.086 "zone_management": false, 00:10:15.086 "zone_append": false, 00:10:15.086 "compare": false, 00:10:15.086 "compare_and_write": false, 00:10:15.086 "abort": true, 00:10:15.086 "seek_hole": false, 00:10:15.086 "seek_data": false, 00:10:15.086 "copy": true, 00:10:15.086 "nvme_iov_md": false 00:10:15.086 }, 00:10:15.086 "memory_domains": [ 00:10:15.086 { 00:10:15.086 "dma_device_id": "system", 00:10:15.086 "dma_device_type": 1 00:10:15.086 }, 00:10:15.086 { 00:10:15.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.086 "dma_device_type": 2 00:10:15.086 } 00:10:15.086 ], 00:10:15.086 "driver_specific": {} 00:10:15.086 } 00:10:15.086 ] 00:10:15.086 20:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.086 20:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:15.087 20:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:15.087 20:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.087 20:07:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:15.087 20:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:15.087 20:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:15.087 20:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:15.087 20:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.087 20:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.087 20:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.087 20:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.087 20:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.087 20:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.087 20:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.087 20:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.087 20:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.087 20:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.087 "name": "Existed_Raid", 00:10:15.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.087 "strip_size_kb": 64, 00:10:15.087 "state": "configuring", 00:10:15.087 "raid_level": "raid0", 00:10:15.087 "superblock": false, 00:10:15.087 "num_base_bdevs": 3, 00:10:15.087 "num_base_bdevs_discovered": 1, 00:10:15.087 "num_base_bdevs_operational": 3, 00:10:15.087 "base_bdevs_list": [ 00:10:15.087 { 00:10:15.087 "name": "BaseBdev1", 
00:10:15.087 "uuid": "b0b3c545-7020-4833-9944-7ae9d5b947f8", 00:10:15.087 "is_configured": true, 00:10:15.087 "data_offset": 0, 00:10:15.087 "data_size": 65536 00:10:15.087 }, 00:10:15.087 { 00:10:15.087 "name": "BaseBdev2", 00:10:15.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.087 "is_configured": false, 00:10:15.087 "data_offset": 0, 00:10:15.087 "data_size": 0 00:10:15.087 }, 00:10:15.087 { 00:10:15.087 "name": "BaseBdev3", 00:10:15.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.087 "is_configured": false, 00:10:15.087 "data_offset": 0, 00:10:15.087 "data_size": 0 00:10:15.087 } 00:10:15.087 ] 00:10:15.087 }' 00:10:15.087 20:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.087 20:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.654 20:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:15.654 20:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.654 20:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.654 [2024-10-17 20:07:01.114946] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:15.654 [2024-10-17 20:07:01.115023] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:15.654 20:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.654 20:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:15.654 20:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.654 20:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.654 [2024-10-17 
20:07:01.122986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:15.654 [2024-10-17 20:07:01.125602] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:15.654 [2024-10-17 20:07:01.125668] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:15.654 [2024-10-17 20:07:01.125683] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:15.654 [2024-10-17 20:07:01.125698] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:15.654 20:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.654 20:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:15.654 20:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:15.654 20:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:15.654 20:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.654 20:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:15.654 20:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:15.654 20:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:15.654 20:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:15.654 20:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.654 20:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.654 20:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:15.654 20:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.654 20:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.654 20:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.654 20:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.654 20:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.654 20:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.654 20:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.654 "name": "Existed_Raid", 00:10:15.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.654 "strip_size_kb": 64, 00:10:15.654 "state": "configuring", 00:10:15.654 "raid_level": "raid0", 00:10:15.654 "superblock": false, 00:10:15.654 "num_base_bdevs": 3, 00:10:15.654 "num_base_bdevs_discovered": 1, 00:10:15.654 "num_base_bdevs_operational": 3, 00:10:15.654 "base_bdevs_list": [ 00:10:15.654 { 00:10:15.654 "name": "BaseBdev1", 00:10:15.654 "uuid": "b0b3c545-7020-4833-9944-7ae9d5b947f8", 00:10:15.654 "is_configured": true, 00:10:15.654 "data_offset": 0, 00:10:15.654 "data_size": 65536 00:10:15.654 }, 00:10:15.654 { 00:10:15.654 "name": "BaseBdev2", 00:10:15.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.654 "is_configured": false, 00:10:15.654 "data_offset": 0, 00:10:15.654 "data_size": 0 00:10:15.654 }, 00:10:15.654 { 00:10:15.654 "name": "BaseBdev3", 00:10:15.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.654 "is_configured": false, 00:10:15.654 "data_offset": 0, 00:10:15.654 "data_size": 0 00:10:15.654 } 00:10:15.654 ] 00:10:15.654 }' 00:10:15.654 20:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:10:15.654 20:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.222 20:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:16.222 20:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.222 20:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.222 [2024-10-17 20:07:01.682608] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:16.222 BaseBdev2 00:10:16.222 20:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.222 20:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:16.222 20:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:16.222 20:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:16.222 20:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:16.222 20:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:16.222 20:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:16.222 20:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:16.222 20:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.222 20:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.222 20:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.222 20:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:16.222 20:07:01 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.222 20:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.222 [ 00:10:16.222 { 00:10:16.222 "name": "BaseBdev2", 00:10:16.222 "aliases": [ 00:10:16.222 "db6b2e5c-e7c7-43d6-84c3-ad79abd49315" 00:10:16.222 ], 00:10:16.222 "product_name": "Malloc disk", 00:10:16.222 "block_size": 512, 00:10:16.222 "num_blocks": 65536, 00:10:16.222 "uuid": "db6b2e5c-e7c7-43d6-84c3-ad79abd49315", 00:10:16.222 "assigned_rate_limits": { 00:10:16.222 "rw_ios_per_sec": 0, 00:10:16.222 "rw_mbytes_per_sec": 0, 00:10:16.222 "r_mbytes_per_sec": 0, 00:10:16.222 "w_mbytes_per_sec": 0 00:10:16.222 }, 00:10:16.222 "claimed": true, 00:10:16.222 "claim_type": "exclusive_write", 00:10:16.222 "zoned": false, 00:10:16.222 "supported_io_types": { 00:10:16.222 "read": true, 00:10:16.222 "write": true, 00:10:16.222 "unmap": true, 00:10:16.222 "flush": true, 00:10:16.222 "reset": true, 00:10:16.222 "nvme_admin": false, 00:10:16.222 "nvme_io": false, 00:10:16.222 "nvme_io_md": false, 00:10:16.222 "write_zeroes": true, 00:10:16.222 "zcopy": true, 00:10:16.222 "get_zone_info": false, 00:10:16.222 "zone_management": false, 00:10:16.222 "zone_append": false, 00:10:16.222 "compare": false, 00:10:16.222 "compare_and_write": false, 00:10:16.222 "abort": true, 00:10:16.222 "seek_hole": false, 00:10:16.222 "seek_data": false, 00:10:16.222 "copy": true, 00:10:16.222 "nvme_iov_md": false 00:10:16.222 }, 00:10:16.222 "memory_domains": [ 00:10:16.222 { 00:10:16.222 "dma_device_id": "system", 00:10:16.222 "dma_device_type": 1 00:10:16.222 }, 00:10:16.222 { 00:10:16.222 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.222 "dma_device_type": 2 00:10:16.222 } 00:10:16.222 ], 00:10:16.222 "driver_specific": {} 00:10:16.222 } 00:10:16.222 ] 00:10:16.222 20:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.222 20:07:01 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:16.222 20:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:16.222 20:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:16.222 20:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:16.222 20:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.222 20:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:16.222 20:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:16.222 20:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.223 20:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:16.223 20:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.223 20:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.223 20:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.223 20:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.223 20:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.223 20:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.223 20:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.223 20:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.223 20:07:01 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.223 20:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.223 "name": "Existed_Raid", 00:10:16.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.223 "strip_size_kb": 64, 00:10:16.223 "state": "configuring", 00:10:16.223 "raid_level": "raid0", 00:10:16.223 "superblock": false, 00:10:16.223 "num_base_bdevs": 3, 00:10:16.223 "num_base_bdevs_discovered": 2, 00:10:16.223 "num_base_bdevs_operational": 3, 00:10:16.223 "base_bdevs_list": [ 00:10:16.223 { 00:10:16.223 "name": "BaseBdev1", 00:10:16.223 "uuid": "b0b3c545-7020-4833-9944-7ae9d5b947f8", 00:10:16.223 "is_configured": true, 00:10:16.223 "data_offset": 0, 00:10:16.223 "data_size": 65536 00:10:16.223 }, 00:10:16.223 { 00:10:16.223 "name": "BaseBdev2", 00:10:16.223 "uuid": "db6b2e5c-e7c7-43d6-84c3-ad79abd49315", 00:10:16.223 "is_configured": true, 00:10:16.223 "data_offset": 0, 00:10:16.223 "data_size": 65536 00:10:16.223 }, 00:10:16.223 { 00:10:16.223 "name": "BaseBdev3", 00:10:16.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.223 "is_configured": false, 00:10:16.223 "data_offset": 0, 00:10:16.223 "data_size": 0 00:10:16.223 } 00:10:16.223 ] 00:10:16.223 }' 00:10:16.223 20:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.223 20:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.790 20:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:16.790 20:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.790 20:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.790 [2024-10-17 20:07:02.304599] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:16.790 [2024-10-17 20:07:02.304666] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:16.790 [2024-10-17 20:07:02.304687] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:16.790 [2024-10-17 20:07:02.305084] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:16.790 [2024-10-17 20:07:02.305399] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:16.790 [2024-10-17 20:07:02.305416] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:16.790 [2024-10-17 20:07:02.305747] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:16.790 BaseBdev3 00:10:16.790 20:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.790 20:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:16.790 20:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:16.790 20:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:16.790 20:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:16.790 20:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:16.790 20:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:16.790 20:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:16.790 20:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.790 20:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.790 20:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.790 
20:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:16.790 20:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.790 20:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.790 [ 00:10:16.790 { 00:10:16.790 "name": "BaseBdev3", 00:10:16.790 "aliases": [ 00:10:16.790 "dcd4f8dc-c450-4dbf-8cf1-8d431c859a3d" 00:10:16.790 ], 00:10:16.790 "product_name": "Malloc disk", 00:10:16.790 "block_size": 512, 00:10:16.790 "num_blocks": 65536, 00:10:16.790 "uuid": "dcd4f8dc-c450-4dbf-8cf1-8d431c859a3d", 00:10:16.790 "assigned_rate_limits": { 00:10:16.790 "rw_ios_per_sec": 0, 00:10:16.790 "rw_mbytes_per_sec": 0, 00:10:16.790 "r_mbytes_per_sec": 0, 00:10:16.790 "w_mbytes_per_sec": 0 00:10:16.790 }, 00:10:16.790 "claimed": true, 00:10:16.790 "claim_type": "exclusive_write", 00:10:16.790 "zoned": false, 00:10:16.790 "supported_io_types": { 00:10:16.790 "read": true, 00:10:16.790 "write": true, 00:10:16.790 "unmap": true, 00:10:16.790 "flush": true, 00:10:16.790 "reset": true, 00:10:16.790 "nvme_admin": false, 00:10:16.790 "nvme_io": false, 00:10:16.790 "nvme_io_md": false, 00:10:16.790 "write_zeroes": true, 00:10:16.790 "zcopy": true, 00:10:16.790 "get_zone_info": false, 00:10:16.790 "zone_management": false, 00:10:16.790 "zone_append": false, 00:10:16.790 "compare": false, 00:10:16.790 "compare_and_write": false, 00:10:16.790 "abort": true, 00:10:16.790 "seek_hole": false, 00:10:16.790 "seek_data": false, 00:10:16.790 "copy": true, 00:10:16.790 "nvme_iov_md": false 00:10:16.790 }, 00:10:16.790 "memory_domains": [ 00:10:16.790 { 00:10:16.790 "dma_device_id": "system", 00:10:16.790 "dma_device_type": 1 00:10:16.790 }, 00:10:16.790 { 00:10:16.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.790 "dma_device_type": 2 00:10:16.790 } 00:10:16.790 ], 00:10:16.790 "driver_specific": {} 00:10:16.790 } 00:10:16.790 ] 
00:10:16.790 20:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.790 20:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:16.790 20:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:16.790 20:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:16.790 20:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:10:16.790 20:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.790 20:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:16.790 20:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:16.790 20:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.790 20:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:16.790 20:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.790 20:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.790 20:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.790 20:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.791 20:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.791 20:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.791 20:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.791 20:07:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:16.791 20:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.791 20:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.791 "name": "Existed_Raid", 00:10:16.791 "uuid": "a2d0243b-0832-4081-be7f-7c30598fb48a", 00:10:16.791 "strip_size_kb": 64, 00:10:16.791 "state": "online", 00:10:16.791 "raid_level": "raid0", 00:10:16.791 "superblock": false, 00:10:16.791 "num_base_bdevs": 3, 00:10:16.791 "num_base_bdevs_discovered": 3, 00:10:16.791 "num_base_bdevs_operational": 3, 00:10:16.791 "base_bdevs_list": [ 00:10:16.791 { 00:10:16.791 "name": "BaseBdev1", 00:10:16.791 "uuid": "b0b3c545-7020-4833-9944-7ae9d5b947f8", 00:10:16.791 "is_configured": true, 00:10:16.791 "data_offset": 0, 00:10:16.791 "data_size": 65536 00:10:16.791 }, 00:10:16.791 { 00:10:16.791 "name": "BaseBdev2", 00:10:16.791 "uuid": "db6b2e5c-e7c7-43d6-84c3-ad79abd49315", 00:10:16.791 "is_configured": true, 00:10:16.791 "data_offset": 0, 00:10:16.791 "data_size": 65536 00:10:16.791 }, 00:10:16.791 { 00:10:16.791 "name": "BaseBdev3", 00:10:16.791 "uuid": "dcd4f8dc-c450-4dbf-8cf1-8d431c859a3d", 00:10:16.791 "is_configured": true, 00:10:16.791 "data_offset": 0, 00:10:16.791 "data_size": 65536 00:10:16.791 } 00:10:16.791 ] 00:10:16.791 }' 00:10:16.791 20:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.791 20:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.359 20:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:17.359 20:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:17.359 20:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:17.359 20:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:10:17.359 20:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:17.359 20:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:17.359 20:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:17.359 20:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.359 20:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:17.359 20:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.359 [2024-10-17 20:07:02.893317] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:17.359 20:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.359 20:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:17.359 "name": "Existed_Raid", 00:10:17.359 "aliases": [ 00:10:17.359 "a2d0243b-0832-4081-be7f-7c30598fb48a" 00:10:17.359 ], 00:10:17.359 "product_name": "Raid Volume", 00:10:17.359 "block_size": 512, 00:10:17.359 "num_blocks": 196608, 00:10:17.359 "uuid": "a2d0243b-0832-4081-be7f-7c30598fb48a", 00:10:17.359 "assigned_rate_limits": { 00:10:17.359 "rw_ios_per_sec": 0, 00:10:17.359 "rw_mbytes_per_sec": 0, 00:10:17.359 "r_mbytes_per_sec": 0, 00:10:17.359 "w_mbytes_per_sec": 0 00:10:17.359 }, 00:10:17.359 "claimed": false, 00:10:17.359 "zoned": false, 00:10:17.359 "supported_io_types": { 00:10:17.359 "read": true, 00:10:17.359 "write": true, 00:10:17.359 "unmap": true, 00:10:17.359 "flush": true, 00:10:17.359 "reset": true, 00:10:17.359 "nvme_admin": false, 00:10:17.359 "nvme_io": false, 00:10:17.359 "nvme_io_md": false, 00:10:17.359 "write_zeroes": true, 00:10:17.359 "zcopy": false, 00:10:17.359 "get_zone_info": false, 00:10:17.359 "zone_management": false, 00:10:17.359 
"zone_append": false, 00:10:17.359 "compare": false, 00:10:17.359 "compare_and_write": false, 00:10:17.359 "abort": false, 00:10:17.359 "seek_hole": false, 00:10:17.359 "seek_data": false, 00:10:17.359 "copy": false, 00:10:17.359 "nvme_iov_md": false 00:10:17.359 }, 00:10:17.359 "memory_domains": [ 00:10:17.359 { 00:10:17.359 "dma_device_id": "system", 00:10:17.359 "dma_device_type": 1 00:10:17.359 }, 00:10:17.359 { 00:10:17.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.359 "dma_device_type": 2 00:10:17.359 }, 00:10:17.359 { 00:10:17.359 "dma_device_id": "system", 00:10:17.359 "dma_device_type": 1 00:10:17.359 }, 00:10:17.359 { 00:10:17.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.359 "dma_device_type": 2 00:10:17.359 }, 00:10:17.359 { 00:10:17.359 "dma_device_id": "system", 00:10:17.359 "dma_device_type": 1 00:10:17.359 }, 00:10:17.359 { 00:10:17.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.359 "dma_device_type": 2 00:10:17.359 } 00:10:17.359 ], 00:10:17.359 "driver_specific": { 00:10:17.359 "raid": { 00:10:17.359 "uuid": "a2d0243b-0832-4081-be7f-7c30598fb48a", 00:10:17.359 "strip_size_kb": 64, 00:10:17.359 "state": "online", 00:10:17.359 "raid_level": "raid0", 00:10:17.359 "superblock": false, 00:10:17.359 "num_base_bdevs": 3, 00:10:17.359 "num_base_bdevs_discovered": 3, 00:10:17.359 "num_base_bdevs_operational": 3, 00:10:17.360 "base_bdevs_list": [ 00:10:17.360 { 00:10:17.360 "name": "BaseBdev1", 00:10:17.360 "uuid": "b0b3c545-7020-4833-9944-7ae9d5b947f8", 00:10:17.360 "is_configured": true, 00:10:17.360 "data_offset": 0, 00:10:17.360 "data_size": 65536 00:10:17.360 }, 00:10:17.360 { 00:10:17.360 "name": "BaseBdev2", 00:10:17.360 "uuid": "db6b2e5c-e7c7-43d6-84c3-ad79abd49315", 00:10:17.360 "is_configured": true, 00:10:17.360 "data_offset": 0, 00:10:17.360 "data_size": 65536 00:10:17.360 }, 00:10:17.360 { 00:10:17.360 "name": "BaseBdev3", 00:10:17.360 "uuid": "dcd4f8dc-c450-4dbf-8cf1-8d431c859a3d", 00:10:17.360 "is_configured": true, 
00:10:17.360 "data_offset": 0, 00:10:17.360 "data_size": 65536 00:10:17.360 } 00:10:17.360 ] 00:10:17.360 } 00:10:17.360 } 00:10:17.360 }' 00:10:17.360 20:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:17.360 20:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:17.360 BaseBdev2 00:10:17.360 BaseBdev3' 00:10:17.360 20:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.618 20:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:17.618 20:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.618 20:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:17.618 20:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.618 20:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.618 20:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.618 20:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.618 20:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.618 20:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.618 20:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.618 20:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:17.618 20:07:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.618 20:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.618 20:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.618 20:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.618 20:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.618 20:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.618 20:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.618 20:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:17.618 20:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.618 20:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.618 20:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.618 20:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.618 20:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.618 20:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.618 20:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:17.618 20:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.618 20:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.618 [2024-10-17 20:07:03.221079] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:17.619 [2024-10-17 20:07:03.221129] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:17.619 [2024-10-17 20:07:03.221203] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:17.878 20:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.878 20:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:17.878 20:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:17.878 20:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:17.878 20:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:17.878 20:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:17.878 20:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:10:17.878 20:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.878 20:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:17.878 20:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:17.878 20:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:17.878 20:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:17.878 20:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.878 20:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.878 20:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:17.878 20:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.878 20:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.878 20:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.878 20:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.878 20:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.878 20:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.878 20:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.878 "name": "Existed_Raid", 00:10:17.878 "uuid": "a2d0243b-0832-4081-be7f-7c30598fb48a", 00:10:17.878 "strip_size_kb": 64, 00:10:17.878 "state": "offline", 00:10:17.878 "raid_level": "raid0", 00:10:17.878 "superblock": false, 00:10:17.878 "num_base_bdevs": 3, 00:10:17.878 "num_base_bdevs_discovered": 2, 00:10:17.878 "num_base_bdevs_operational": 2, 00:10:17.878 "base_bdevs_list": [ 00:10:17.878 { 00:10:17.878 "name": null, 00:10:17.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.878 "is_configured": false, 00:10:17.878 "data_offset": 0, 00:10:17.878 "data_size": 65536 00:10:17.878 }, 00:10:17.878 { 00:10:17.878 "name": "BaseBdev2", 00:10:17.878 "uuid": "db6b2e5c-e7c7-43d6-84c3-ad79abd49315", 00:10:17.878 "is_configured": true, 00:10:17.878 "data_offset": 0, 00:10:17.878 "data_size": 65536 00:10:17.878 }, 00:10:17.878 { 00:10:17.878 "name": "BaseBdev3", 00:10:17.878 "uuid": "dcd4f8dc-c450-4dbf-8cf1-8d431c859a3d", 00:10:17.878 "is_configured": true, 00:10:17.878 "data_offset": 0, 00:10:17.878 "data_size": 65536 00:10:17.878 } 00:10:17.878 ] 00:10:17.878 }' 00:10:17.878 20:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.878 20:07:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.446 20:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:18.446 20:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:18.446 20:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.446 20:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.446 20:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:18.446 20:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.446 20:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.446 20:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:18.446 20:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:18.446 20:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:18.446 20:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.446 20:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.446 [2024-10-17 20:07:03.927762] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:18.447 20:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.447 20:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:18.447 20:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:18.447 20:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.447 20:07:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.447 20:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.447 20:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:18.447 20:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.447 20:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:18.447 20:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:18.447 20:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:18.447 20:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.447 20:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.447 [2024-10-17 20:07:04.078877] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:18.447 [2024-10-17 20:07:04.079008] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:18.706 20:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.706 20:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:18.706 20:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:18.706 20:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.706 20:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.706 20:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:18.706 20:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:10:18.706 20:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.706 20:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:18.706 20:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:18.706 20:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:18.706 20:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:18.706 20:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:18.706 20:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:18.706 20:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.706 20:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.706 BaseBdev2 00:10:18.706 20:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.706 20:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:18.706 20:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:18.706 20:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:18.706 20:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:18.706 20:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:18.706 20:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:18.706 20:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:18.706 20:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:18.706 20:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.706 20:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.706 20:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:18.706 20:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.706 20:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.706 [ 00:10:18.706 { 00:10:18.706 "name": "BaseBdev2", 00:10:18.706 "aliases": [ 00:10:18.706 "ef8b9eb6-156b-407b-a930-2286125d805b" 00:10:18.706 ], 00:10:18.706 "product_name": "Malloc disk", 00:10:18.706 "block_size": 512, 00:10:18.706 "num_blocks": 65536, 00:10:18.706 "uuid": "ef8b9eb6-156b-407b-a930-2286125d805b", 00:10:18.706 "assigned_rate_limits": { 00:10:18.706 "rw_ios_per_sec": 0, 00:10:18.707 "rw_mbytes_per_sec": 0, 00:10:18.707 "r_mbytes_per_sec": 0, 00:10:18.707 "w_mbytes_per_sec": 0 00:10:18.707 }, 00:10:18.707 "claimed": false, 00:10:18.707 "zoned": false, 00:10:18.707 "supported_io_types": { 00:10:18.707 "read": true, 00:10:18.707 "write": true, 00:10:18.707 "unmap": true, 00:10:18.707 "flush": true, 00:10:18.707 "reset": true, 00:10:18.707 "nvme_admin": false, 00:10:18.707 "nvme_io": false, 00:10:18.707 "nvme_io_md": false, 00:10:18.707 "write_zeroes": true, 00:10:18.707 "zcopy": true, 00:10:18.707 "get_zone_info": false, 00:10:18.707 "zone_management": false, 00:10:18.707 "zone_append": false, 00:10:18.707 "compare": false, 00:10:18.707 "compare_and_write": false, 00:10:18.707 "abort": true, 00:10:18.707 "seek_hole": false, 00:10:18.707 "seek_data": false, 00:10:18.707 "copy": true, 00:10:18.707 "nvme_iov_md": false 00:10:18.707 }, 00:10:18.707 "memory_domains": [ 00:10:18.707 { 00:10:18.707 "dma_device_id": "system", 00:10:18.707 "dma_device_type": 1 00:10:18.707 }, 
00:10:18.707 { 00:10:18.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.707 "dma_device_type": 2 00:10:18.707 } 00:10:18.707 ], 00:10:18.707 "driver_specific": {} 00:10:18.707 } 00:10:18.707 ] 00:10:18.707 20:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.707 20:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:18.707 20:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:18.707 20:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:18.707 20:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:18.707 20:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.707 20:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.966 BaseBdev3 00:10:18.966 20:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.966 20:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:18.966 20:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:18.966 20:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:18.966 20:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:18.966 20:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:18.966 20:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:18.966 20:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:18.966 20:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:18.966 20:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.966 20:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.966 20:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:18.966 20:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.966 20:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.966 [ 00:10:18.966 { 00:10:18.966 "name": "BaseBdev3", 00:10:18.966 "aliases": [ 00:10:18.966 "352ffb3b-d601-4a28-8495-78e93ea45566" 00:10:18.966 ], 00:10:18.966 "product_name": "Malloc disk", 00:10:18.966 "block_size": 512, 00:10:18.966 "num_blocks": 65536, 00:10:18.966 "uuid": "352ffb3b-d601-4a28-8495-78e93ea45566", 00:10:18.966 "assigned_rate_limits": { 00:10:18.966 "rw_ios_per_sec": 0, 00:10:18.966 "rw_mbytes_per_sec": 0, 00:10:18.966 "r_mbytes_per_sec": 0, 00:10:18.966 "w_mbytes_per_sec": 0 00:10:18.966 }, 00:10:18.966 "claimed": false, 00:10:18.966 "zoned": false, 00:10:18.966 "supported_io_types": { 00:10:18.966 "read": true, 00:10:18.966 "write": true, 00:10:18.966 "unmap": true, 00:10:18.966 "flush": true, 00:10:18.966 "reset": true, 00:10:18.966 "nvme_admin": false, 00:10:18.966 "nvme_io": false, 00:10:18.966 "nvme_io_md": false, 00:10:18.966 "write_zeroes": true, 00:10:18.966 "zcopy": true, 00:10:18.966 "get_zone_info": false, 00:10:18.966 "zone_management": false, 00:10:18.966 "zone_append": false, 00:10:18.966 "compare": false, 00:10:18.966 "compare_and_write": false, 00:10:18.966 "abort": true, 00:10:18.966 "seek_hole": false, 00:10:18.966 "seek_data": false, 00:10:18.966 "copy": true, 00:10:18.966 "nvme_iov_md": false 00:10:18.966 }, 00:10:18.966 "memory_domains": [ 00:10:18.966 { 00:10:18.966 "dma_device_id": "system", 00:10:18.966 "dma_device_type": 1 00:10:18.966 }, 00:10:18.966 { 
00:10:18.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.966 "dma_device_type": 2 00:10:18.966 } 00:10:18.966 ], 00:10:18.966 "driver_specific": {} 00:10:18.966 } 00:10:18.966 ] 00:10:18.966 20:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.966 20:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:18.966 20:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:18.966 20:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:18.966 20:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:18.966 20:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.966 20:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.966 [2024-10-17 20:07:04.402782] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:18.966 [2024-10-17 20:07:04.402840] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:18.966 [2024-10-17 20:07:04.402872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:18.966 [2024-10-17 20:07:04.405493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:18.966 20:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.966 20:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:18.966 20:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.966 20:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:10:18.966 20:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:18.967 20:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:18.967 20:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:18.967 20:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.967 20:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.967 20:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.967 20:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.967 20:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.967 20:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.967 20:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.967 20:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.967 20:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.967 20:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.967 "name": "Existed_Raid", 00:10:18.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.967 "strip_size_kb": 64, 00:10:18.967 "state": "configuring", 00:10:18.967 "raid_level": "raid0", 00:10:18.967 "superblock": false, 00:10:18.967 "num_base_bdevs": 3, 00:10:18.967 "num_base_bdevs_discovered": 2, 00:10:18.967 "num_base_bdevs_operational": 3, 00:10:18.967 "base_bdevs_list": [ 00:10:18.967 { 00:10:18.967 "name": "BaseBdev1", 00:10:18.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.967 
"is_configured": false, 00:10:18.967 "data_offset": 0, 00:10:18.967 "data_size": 0 00:10:18.967 }, 00:10:18.967 { 00:10:18.967 "name": "BaseBdev2", 00:10:18.967 "uuid": "ef8b9eb6-156b-407b-a930-2286125d805b", 00:10:18.967 "is_configured": true, 00:10:18.967 "data_offset": 0, 00:10:18.967 "data_size": 65536 00:10:18.967 }, 00:10:18.967 { 00:10:18.967 "name": "BaseBdev3", 00:10:18.967 "uuid": "352ffb3b-d601-4a28-8495-78e93ea45566", 00:10:18.967 "is_configured": true, 00:10:18.967 "data_offset": 0, 00:10:18.967 "data_size": 65536 00:10:18.967 } 00:10:18.967 ] 00:10:18.967 }' 00:10:18.967 20:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.967 20:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.534 20:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:19.534 20:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.534 20:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.534 [2024-10-17 20:07:04.963094] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:19.534 20:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.534 20:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:19.534 20:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:19.534 20:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:19.534 20:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:19.534 20:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:19.534 20:07:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:19.534 20:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.534 20:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.534 20:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.534 20:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.534 20:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.534 20:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.534 20:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.534 20:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.534 20:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.534 20:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.534 "name": "Existed_Raid", 00:10:19.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.534 "strip_size_kb": 64, 00:10:19.534 "state": "configuring", 00:10:19.534 "raid_level": "raid0", 00:10:19.534 "superblock": false, 00:10:19.534 "num_base_bdevs": 3, 00:10:19.534 "num_base_bdevs_discovered": 1, 00:10:19.534 "num_base_bdevs_operational": 3, 00:10:19.534 "base_bdevs_list": [ 00:10:19.535 { 00:10:19.535 "name": "BaseBdev1", 00:10:19.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.535 "is_configured": false, 00:10:19.535 "data_offset": 0, 00:10:19.535 "data_size": 0 00:10:19.535 }, 00:10:19.535 { 00:10:19.535 "name": null, 00:10:19.535 "uuid": "ef8b9eb6-156b-407b-a930-2286125d805b", 00:10:19.535 "is_configured": false, 00:10:19.535 "data_offset": 0, 
00:10:19.535 "data_size": 65536 00:10:19.535 }, 00:10:19.535 { 00:10:19.535 "name": "BaseBdev3", 00:10:19.535 "uuid": "352ffb3b-d601-4a28-8495-78e93ea45566", 00:10:19.535 "is_configured": true, 00:10:19.535 "data_offset": 0, 00:10:19.535 "data_size": 65536 00:10:19.535 } 00:10:19.535 ] 00:10:19.535 }' 00:10:19.535 20:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.535 20:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.105 20:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.105 20:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:20.105 20:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.105 20:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.105 20:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.105 20:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:20.105 20:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:20.105 20:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.105 20:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.105 [2024-10-17 20:07:05.595753] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:20.105 BaseBdev1 00:10:20.105 20:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.105 20:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:20.105 20:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local 
bdev_name=BaseBdev1 00:10:20.105 20:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:20.105 20:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:20.105 20:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:20.105 20:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:20.105 20:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:20.105 20:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.105 20:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.105 20:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.105 20:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:20.105 20:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.105 20:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.105 [ 00:10:20.105 { 00:10:20.105 "name": "BaseBdev1", 00:10:20.105 "aliases": [ 00:10:20.105 "5368bbdd-48c3-4aaa-a0cd-e9291e701dca" 00:10:20.105 ], 00:10:20.105 "product_name": "Malloc disk", 00:10:20.105 "block_size": 512, 00:10:20.105 "num_blocks": 65536, 00:10:20.105 "uuid": "5368bbdd-48c3-4aaa-a0cd-e9291e701dca", 00:10:20.105 "assigned_rate_limits": { 00:10:20.105 "rw_ios_per_sec": 0, 00:10:20.105 "rw_mbytes_per_sec": 0, 00:10:20.105 "r_mbytes_per_sec": 0, 00:10:20.105 "w_mbytes_per_sec": 0 00:10:20.105 }, 00:10:20.105 "claimed": true, 00:10:20.105 "claim_type": "exclusive_write", 00:10:20.105 "zoned": false, 00:10:20.105 "supported_io_types": { 00:10:20.105 "read": true, 00:10:20.105 "write": true, 00:10:20.105 "unmap": 
true, 00:10:20.105 "flush": true, 00:10:20.105 "reset": true, 00:10:20.105 "nvme_admin": false, 00:10:20.105 "nvme_io": false, 00:10:20.105 "nvme_io_md": false, 00:10:20.105 "write_zeroes": true, 00:10:20.105 "zcopy": true, 00:10:20.105 "get_zone_info": false, 00:10:20.105 "zone_management": false, 00:10:20.105 "zone_append": false, 00:10:20.105 "compare": false, 00:10:20.105 "compare_and_write": false, 00:10:20.105 "abort": true, 00:10:20.105 "seek_hole": false, 00:10:20.105 "seek_data": false, 00:10:20.105 "copy": true, 00:10:20.105 "nvme_iov_md": false 00:10:20.105 }, 00:10:20.105 "memory_domains": [ 00:10:20.105 { 00:10:20.105 "dma_device_id": "system", 00:10:20.105 "dma_device_type": 1 00:10:20.105 }, 00:10:20.105 { 00:10:20.105 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.105 "dma_device_type": 2 00:10:20.105 } 00:10:20.105 ], 00:10:20.105 "driver_specific": {} 00:10:20.105 } 00:10:20.105 ] 00:10:20.105 20:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.105 20:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:20.105 20:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:20.105 20:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:20.105 20:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:20.105 20:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:20.105 20:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:20.105 20:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:20.105 20:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.105 20:07:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.105 20:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.105 20:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.106 20:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.106 20:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.106 20:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.106 20:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:20.106 20:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.106 20:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.106 "name": "Existed_Raid", 00:10:20.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:20.106 "strip_size_kb": 64, 00:10:20.106 "state": "configuring", 00:10:20.106 "raid_level": "raid0", 00:10:20.106 "superblock": false, 00:10:20.106 "num_base_bdevs": 3, 00:10:20.106 "num_base_bdevs_discovered": 2, 00:10:20.106 "num_base_bdevs_operational": 3, 00:10:20.106 "base_bdevs_list": [ 00:10:20.106 { 00:10:20.106 "name": "BaseBdev1", 00:10:20.106 "uuid": "5368bbdd-48c3-4aaa-a0cd-e9291e701dca", 00:10:20.106 "is_configured": true, 00:10:20.106 "data_offset": 0, 00:10:20.106 "data_size": 65536 00:10:20.106 }, 00:10:20.106 { 00:10:20.106 "name": null, 00:10:20.106 "uuid": "ef8b9eb6-156b-407b-a930-2286125d805b", 00:10:20.106 "is_configured": false, 00:10:20.106 "data_offset": 0, 00:10:20.106 "data_size": 65536 00:10:20.106 }, 00:10:20.106 { 00:10:20.106 "name": "BaseBdev3", 00:10:20.106 "uuid": "352ffb3b-d601-4a28-8495-78e93ea45566", 00:10:20.106 "is_configured": true, 00:10:20.106 "data_offset": 0, 
00:10:20.106 "data_size": 65536 00:10:20.106 } 00:10:20.106 ] 00:10:20.106 }' 00:10:20.106 20:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.106 20:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.674 20:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:20.674 20:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.674 20:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.674 20:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.674 20:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.674 20:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:20.674 20:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:20.674 20:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.674 20:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.674 [2024-10-17 20:07:06.228105] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:20.674 20:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.674 20:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:20.674 20:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:20.674 20:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:20.674 20:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:10:20.674 20:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:20.674 20:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:20.674 20:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.674 20:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.674 20:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.674 20:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.674 20:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.674 20:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:20.674 20:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.674 20:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.674 20:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.674 20:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.674 "name": "Existed_Raid", 00:10:20.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:20.674 "strip_size_kb": 64, 00:10:20.674 "state": "configuring", 00:10:20.674 "raid_level": "raid0", 00:10:20.674 "superblock": false, 00:10:20.674 "num_base_bdevs": 3, 00:10:20.674 "num_base_bdevs_discovered": 1, 00:10:20.674 "num_base_bdevs_operational": 3, 00:10:20.674 "base_bdevs_list": [ 00:10:20.674 { 00:10:20.674 "name": "BaseBdev1", 00:10:20.674 "uuid": "5368bbdd-48c3-4aaa-a0cd-e9291e701dca", 00:10:20.674 "is_configured": true, 00:10:20.674 "data_offset": 0, 00:10:20.674 "data_size": 65536 00:10:20.674 }, 00:10:20.674 { 
00:10:20.674 "name": null, 00:10:20.674 "uuid": "ef8b9eb6-156b-407b-a930-2286125d805b", 00:10:20.674 "is_configured": false, 00:10:20.674 "data_offset": 0, 00:10:20.674 "data_size": 65536 00:10:20.674 }, 00:10:20.674 { 00:10:20.674 "name": null, 00:10:20.674 "uuid": "352ffb3b-d601-4a28-8495-78e93ea45566", 00:10:20.674 "is_configured": false, 00:10:20.674 "data_offset": 0, 00:10:20.674 "data_size": 65536 00:10:20.674 } 00:10:20.674 ] 00:10:20.674 }' 00:10:20.674 20:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.674 20:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.242 20:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.242 20:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:21.242 20:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.242 20:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.242 20:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.242 20:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:21.242 20:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:21.242 20:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.242 20:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.242 [2024-10-17 20:07:06.836510] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:21.242 20:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.243 20:07:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:21.243 20:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:21.243 20:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:21.243 20:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:21.243 20:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:21.243 20:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:21.243 20:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.243 20:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.243 20:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.243 20:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.243 20:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.243 20:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:21.243 20:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.243 20:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.243 20:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.501 20:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.501 "name": "Existed_Raid", 00:10:21.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.501 "strip_size_kb": 64, 00:10:21.501 "state": "configuring", 00:10:21.501 "raid_level": "raid0", 00:10:21.501 
"superblock": false, 00:10:21.501 "num_base_bdevs": 3, 00:10:21.501 "num_base_bdevs_discovered": 2, 00:10:21.501 "num_base_bdevs_operational": 3, 00:10:21.501 "base_bdevs_list": [ 00:10:21.501 { 00:10:21.501 "name": "BaseBdev1", 00:10:21.501 "uuid": "5368bbdd-48c3-4aaa-a0cd-e9291e701dca", 00:10:21.501 "is_configured": true, 00:10:21.501 "data_offset": 0, 00:10:21.501 "data_size": 65536 00:10:21.501 }, 00:10:21.501 { 00:10:21.501 "name": null, 00:10:21.501 "uuid": "ef8b9eb6-156b-407b-a930-2286125d805b", 00:10:21.501 "is_configured": false, 00:10:21.501 "data_offset": 0, 00:10:21.501 "data_size": 65536 00:10:21.501 }, 00:10:21.501 { 00:10:21.501 "name": "BaseBdev3", 00:10:21.501 "uuid": "352ffb3b-d601-4a28-8495-78e93ea45566", 00:10:21.501 "is_configured": true, 00:10:21.501 "data_offset": 0, 00:10:21.501 "data_size": 65536 00:10:21.501 } 00:10:21.501 ] 00:10:21.501 }' 00:10:21.501 20:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.501 20:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.759 20:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:21.759 20:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.759 20:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.759 20:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.759 20:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.017 20:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:22.017 20:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:22.017 20:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:22.017 20:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.017 [2024-10-17 20:07:07.432633] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:22.017 20:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.017 20:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:22.017 20:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:22.017 20:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:22.017 20:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:22.017 20:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:22.017 20:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:22.017 20:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.017 20:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.017 20:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.017 20:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.017 20:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.017 20:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.017 20:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:22.017 20:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.017 20:07:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.017 20:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.017 "name": "Existed_Raid", 00:10:22.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.017 "strip_size_kb": 64, 00:10:22.017 "state": "configuring", 00:10:22.017 "raid_level": "raid0", 00:10:22.017 "superblock": false, 00:10:22.017 "num_base_bdevs": 3, 00:10:22.017 "num_base_bdevs_discovered": 1, 00:10:22.017 "num_base_bdevs_operational": 3, 00:10:22.017 "base_bdevs_list": [ 00:10:22.017 { 00:10:22.017 "name": null, 00:10:22.017 "uuid": "5368bbdd-48c3-4aaa-a0cd-e9291e701dca", 00:10:22.017 "is_configured": false, 00:10:22.017 "data_offset": 0, 00:10:22.017 "data_size": 65536 00:10:22.017 }, 00:10:22.017 { 00:10:22.017 "name": null, 00:10:22.017 "uuid": "ef8b9eb6-156b-407b-a930-2286125d805b", 00:10:22.017 "is_configured": false, 00:10:22.017 "data_offset": 0, 00:10:22.017 "data_size": 65536 00:10:22.017 }, 00:10:22.017 { 00:10:22.017 "name": "BaseBdev3", 00:10:22.017 "uuid": "352ffb3b-d601-4a28-8495-78e93ea45566", 00:10:22.017 "is_configured": true, 00:10:22.017 "data_offset": 0, 00:10:22.017 "data_size": 65536 00:10:22.017 } 00:10:22.017 ] 00:10:22.017 }' 00:10:22.017 20:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.017 20:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.585 20:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.585 20:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:22.585 20:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.585 20:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.585 20:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:10:22.585 20:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:22.585 20:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:22.585 20:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.585 20:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.585 [2024-10-17 20:07:08.124814] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:22.585 20:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.585 20:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:22.585 20:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:22.585 20:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:22.585 20:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:22.585 20:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:22.585 20:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:22.585 20:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.585 20:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.585 20:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.586 20:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.586 20:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:10:22.586 20:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.586 20:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:22.586 20:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.586 20:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.586 20:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.586 "name": "Existed_Raid", 00:10:22.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.586 "strip_size_kb": 64, 00:10:22.586 "state": "configuring", 00:10:22.586 "raid_level": "raid0", 00:10:22.586 "superblock": false, 00:10:22.586 "num_base_bdevs": 3, 00:10:22.586 "num_base_bdevs_discovered": 2, 00:10:22.586 "num_base_bdevs_operational": 3, 00:10:22.586 "base_bdevs_list": [ 00:10:22.586 { 00:10:22.586 "name": null, 00:10:22.586 "uuid": "5368bbdd-48c3-4aaa-a0cd-e9291e701dca", 00:10:22.586 "is_configured": false, 00:10:22.586 "data_offset": 0, 00:10:22.586 "data_size": 65536 00:10:22.586 }, 00:10:22.586 { 00:10:22.586 "name": "BaseBdev2", 00:10:22.586 "uuid": "ef8b9eb6-156b-407b-a930-2286125d805b", 00:10:22.586 "is_configured": true, 00:10:22.586 "data_offset": 0, 00:10:22.586 "data_size": 65536 00:10:22.586 }, 00:10:22.586 { 00:10:22.586 "name": "BaseBdev3", 00:10:22.586 "uuid": "352ffb3b-d601-4a28-8495-78e93ea45566", 00:10:22.586 "is_configured": true, 00:10:22.586 "data_offset": 0, 00:10:22.586 "data_size": 65536 00:10:22.586 } 00:10:22.586 ] 00:10:22.586 }' 00:10:22.586 20:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.586 20:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.184 20:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.184 20:07:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:23.184 20:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.184 20:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.184 20:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.184 20:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:23.184 20:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.184 20:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.184 20:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.184 20:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:23.184 20:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.184 20:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 5368bbdd-48c3-4aaa-a0cd-e9291e701dca 00:10:23.184 20:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.184 20:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.184 [2024-10-17 20:07:08.799552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:23.184 [2024-10-17 20:07:08.799813] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:23.184 [2024-10-17 20:07:08.799844] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:23.184 [2024-10-17 20:07:08.800233] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:10:23.184 [2024-10-17 20:07:08.800429] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:23.184 [2024-10-17 20:07:08.800475] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:23.184 [2024-10-17 20:07:08.800835] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:23.184 NewBaseBdev 00:10:23.184 20:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.184 20:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:23.184 20:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:23.184 20:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:23.184 20:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:23.184 20:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:23.184 20:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:23.184 20:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:23.184 20:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.184 20:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.184 20:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.184 20:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:23.184 20:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.184 20:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:10:23.184 [ 00:10:23.184 { 00:10:23.184 "name": "NewBaseBdev", 00:10:23.184 "aliases": [ 00:10:23.184 "5368bbdd-48c3-4aaa-a0cd-e9291e701dca" 00:10:23.184 ], 00:10:23.184 "product_name": "Malloc disk", 00:10:23.184 "block_size": 512, 00:10:23.184 "num_blocks": 65536, 00:10:23.184 "uuid": "5368bbdd-48c3-4aaa-a0cd-e9291e701dca", 00:10:23.184 "assigned_rate_limits": { 00:10:23.184 "rw_ios_per_sec": 0, 00:10:23.184 "rw_mbytes_per_sec": 0, 00:10:23.184 "r_mbytes_per_sec": 0, 00:10:23.184 "w_mbytes_per_sec": 0 00:10:23.184 }, 00:10:23.184 "claimed": true, 00:10:23.184 "claim_type": "exclusive_write", 00:10:23.184 "zoned": false, 00:10:23.184 "supported_io_types": { 00:10:23.184 "read": true, 00:10:23.184 "write": true, 00:10:23.184 "unmap": true, 00:10:23.184 "flush": true, 00:10:23.184 "reset": true, 00:10:23.184 "nvme_admin": false, 00:10:23.184 "nvme_io": false, 00:10:23.184 "nvme_io_md": false, 00:10:23.184 "write_zeroes": true, 00:10:23.184 "zcopy": true, 00:10:23.184 "get_zone_info": false, 00:10:23.184 "zone_management": false, 00:10:23.184 "zone_append": false, 00:10:23.184 "compare": false, 00:10:23.184 "compare_and_write": false, 00:10:23.184 "abort": true, 00:10:23.184 "seek_hole": false, 00:10:23.184 "seek_data": false, 00:10:23.184 "copy": true, 00:10:23.184 "nvme_iov_md": false 00:10:23.184 }, 00:10:23.184 "memory_domains": [ 00:10:23.184 { 00:10:23.184 "dma_device_id": "system", 00:10:23.443 "dma_device_type": 1 00:10:23.443 }, 00:10:23.443 { 00:10:23.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.443 "dma_device_type": 2 00:10:23.444 } 00:10:23.444 ], 00:10:23.444 "driver_specific": {} 00:10:23.444 } 00:10:23.444 ] 00:10:23.444 20:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.444 20:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:23.444 20:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:10:23.444 20:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:23.444 20:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:23.444 20:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:23.444 20:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:23.444 20:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:23.444 20:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.444 20:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.444 20:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.444 20:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.444 20:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.444 20:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:23.444 20:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.444 20:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.444 20:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.444 20:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.444 "name": "Existed_Raid", 00:10:23.444 "uuid": "49289ac1-7d94-40ed-bc76-0743f06f9df3", 00:10:23.444 "strip_size_kb": 64, 00:10:23.444 "state": "online", 00:10:23.444 "raid_level": "raid0", 00:10:23.444 "superblock": false, 00:10:23.444 "num_base_bdevs": 3, 00:10:23.444 
"num_base_bdevs_discovered": 3, 00:10:23.444 "num_base_bdevs_operational": 3, 00:10:23.444 "base_bdevs_list": [ 00:10:23.444 { 00:10:23.444 "name": "NewBaseBdev", 00:10:23.444 "uuid": "5368bbdd-48c3-4aaa-a0cd-e9291e701dca", 00:10:23.444 "is_configured": true, 00:10:23.444 "data_offset": 0, 00:10:23.444 "data_size": 65536 00:10:23.444 }, 00:10:23.444 { 00:10:23.444 "name": "BaseBdev2", 00:10:23.444 "uuid": "ef8b9eb6-156b-407b-a930-2286125d805b", 00:10:23.444 "is_configured": true, 00:10:23.444 "data_offset": 0, 00:10:23.444 "data_size": 65536 00:10:23.444 }, 00:10:23.444 { 00:10:23.444 "name": "BaseBdev3", 00:10:23.444 "uuid": "352ffb3b-d601-4a28-8495-78e93ea45566", 00:10:23.444 "is_configured": true, 00:10:23.444 "data_offset": 0, 00:10:23.444 "data_size": 65536 00:10:23.444 } 00:10:23.444 ] 00:10:23.444 }' 00:10:23.444 20:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.444 20:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.012 20:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:24.012 20:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:24.012 20:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:24.012 20:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:24.012 20:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:24.012 20:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:24.012 20:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:24.012 20:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:24.012 20:07:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.013 20:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.013 [2024-10-17 20:07:09.380295] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:24.013 20:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.013 20:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:24.013 "name": "Existed_Raid", 00:10:24.013 "aliases": [ 00:10:24.013 "49289ac1-7d94-40ed-bc76-0743f06f9df3" 00:10:24.013 ], 00:10:24.013 "product_name": "Raid Volume", 00:10:24.013 "block_size": 512, 00:10:24.013 "num_blocks": 196608, 00:10:24.013 "uuid": "49289ac1-7d94-40ed-bc76-0743f06f9df3", 00:10:24.013 "assigned_rate_limits": { 00:10:24.013 "rw_ios_per_sec": 0, 00:10:24.013 "rw_mbytes_per_sec": 0, 00:10:24.013 "r_mbytes_per_sec": 0, 00:10:24.013 "w_mbytes_per_sec": 0 00:10:24.013 }, 00:10:24.013 "claimed": false, 00:10:24.013 "zoned": false, 00:10:24.013 "supported_io_types": { 00:10:24.013 "read": true, 00:10:24.013 "write": true, 00:10:24.013 "unmap": true, 00:10:24.013 "flush": true, 00:10:24.013 "reset": true, 00:10:24.013 "nvme_admin": false, 00:10:24.013 "nvme_io": false, 00:10:24.013 "nvme_io_md": false, 00:10:24.013 "write_zeroes": true, 00:10:24.013 "zcopy": false, 00:10:24.013 "get_zone_info": false, 00:10:24.013 "zone_management": false, 00:10:24.013 "zone_append": false, 00:10:24.013 "compare": false, 00:10:24.013 "compare_and_write": false, 00:10:24.013 "abort": false, 00:10:24.013 "seek_hole": false, 00:10:24.013 "seek_data": false, 00:10:24.013 "copy": false, 00:10:24.013 "nvme_iov_md": false 00:10:24.013 }, 00:10:24.013 "memory_domains": [ 00:10:24.013 { 00:10:24.013 "dma_device_id": "system", 00:10:24.013 "dma_device_type": 1 00:10:24.013 }, 00:10:24.013 { 00:10:24.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.013 "dma_device_type": 2 00:10:24.013 }, 
00:10:24.013 { 00:10:24.013 "dma_device_id": "system", 00:10:24.013 "dma_device_type": 1 00:10:24.013 }, 00:10:24.013 { 00:10:24.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.013 "dma_device_type": 2 00:10:24.013 }, 00:10:24.013 { 00:10:24.013 "dma_device_id": "system", 00:10:24.013 "dma_device_type": 1 00:10:24.013 }, 00:10:24.013 { 00:10:24.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.013 "dma_device_type": 2 00:10:24.013 } 00:10:24.013 ], 00:10:24.013 "driver_specific": { 00:10:24.013 "raid": { 00:10:24.013 "uuid": "49289ac1-7d94-40ed-bc76-0743f06f9df3", 00:10:24.013 "strip_size_kb": 64, 00:10:24.013 "state": "online", 00:10:24.013 "raid_level": "raid0", 00:10:24.013 "superblock": false, 00:10:24.013 "num_base_bdevs": 3, 00:10:24.013 "num_base_bdevs_discovered": 3, 00:10:24.013 "num_base_bdevs_operational": 3, 00:10:24.013 "base_bdevs_list": [ 00:10:24.013 { 00:10:24.013 "name": "NewBaseBdev", 00:10:24.013 "uuid": "5368bbdd-48c3-4aaa-a0cd-e9291e701dca", 00:10:24.013 "is_configured": true, 00:10:24.013 "data_offset": 0, 00:10:24.013 "data_size": 65536 00:10:24.013 }, 00:10:24.013 { 00:10:24.013 "name": "BaseBdev2", 00:10:24.013 "uuid": "ef8b9eb6-156b-407b-a930-2286125d805b", 00:10:24.013 "is_configured": true, 00:10:24.013 "data_offset": 0, 00:10:24.013 "data_size": 65536 00:10:24.013 }, 00:10:24.013 { 00:10:24.013 "name": "BaseBdev3", 00:10:24.013 "uuid": "352ffb3b-d601-4a28-8495-78e93ea45566", 00:10:24.013 "is_configured": true, 00:10:24.013 "data_offset": 0, 00:10:24.013 "data_size": 65536 00:10:24.013 } 00:10:24.013 ] 00:10:24.013 } 00:10:24.013 } 00:10:24.013 }' 00:10:24.013 20:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:24.013 20:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:24.013 BaseBdev2 00:10:24.013 BaseBdev3' 00:10:24.013 20:07:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:24.013 20:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:24.013 20:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:24.013 20:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:24.013 20:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:24.013 20:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.013 20:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.013 20:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.013 20:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:24.013 20:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:24.013 20:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:24.013 20:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:24.013 20:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.013 20:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.013 20:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:24.013 20:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.013 20:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:10:24.013 20:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:24.013 20:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:24.013 20:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:24.013 20:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.013 20:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:24.013 20:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.273 20:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.273 20:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:24.273 20:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:24.273 20:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:24.273 20:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.273 20:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.273 [2024-10-17 20:07:09.708065] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:24.273 [2024-10-17 20:07:09.708119] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:24.273 [2024-10-17 20:07:09.708245] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:24.273 [2024-10-17 20:07:09.708318] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:24.273 [2024-10-17 20:07:09.708339] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:24.273 20:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.273 20:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63706 00:10:24.273 20:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 63706 ']' 00:10:24.273 20:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 63706 00:10:24.273 20:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:10:24.273 20:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:24.273 20:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63706 00:10:24.273 20:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:24.273 20:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:24.273 20:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63706' 00:10:24.273 killing process with pid 63706 00:10:24.273 20:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 63706 00:10:24.273 [2024-10-17 20:07:09.747177] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:24.273 20:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 63706 00:10:24.532 [2024-10-17 20:07:10.027616] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:25.910 20:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:25.910 00:10:25.910 real 0m12.388s 00:10:25.910 user 0m20.538s 00:10:25.910 sys 0m1.680s 00:10:25.910 20:07:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- 
# xtrace_disable 00:10:25.910 20:07:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.910 ************************************ 00:10:25.910 END TEST raid_state_function_test 00:10:25.910 ************************************ 00:10:25.910 20:07:11 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:10:25.910 20:07:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:25.910 20:07:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:25.910 20:07:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:25.910 ************************************ 00:10:25.910 START TEST raid_state_function_test_sb 00:10:25.910 ************************************ 00:10:25.910 20:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 true 00:10:25.910 20:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:25.910 20:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:25.910 20:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:25.910 20:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:25.910 20:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:25.910 20:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:25.910 20:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:25.910 20:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:25.910 20:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:25.910 20:07:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:25.910 20:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:25.910 20:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:25.910 20:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:25.910 20:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:25.910 20:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:25.910 20:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:25.910 20:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:25.910 20:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:25.910 Process raid pid: 64344 00:10:25.910 20:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:25.910 20:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:25.910 20:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:25.910 20:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:25.910 20:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:25.910 20:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:25.910 20:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:25.910 20:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:25.910 20:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64344 
00:10:25.910 20:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64344' 00:10:25.910 20:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:25.910 20:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64344 00:10:25.910 20:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 64344 ']' 00:10:25.910 20:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:25.910 20:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:25.910 20:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:25.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:25.910 20:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:25.910 20:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.910 [2024-10-17 20:07:11.361082] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 
00:10:25.910 [2024-10-17 20:07:11.361515] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:25.910 [2024-10-17 20:07:11.539928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.170 [2024-10-17 20:07:11.684276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.428 [2024-10-17 20:07:11.920335] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:26.428 [2024-10-17 20:07:11.920700] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:26.688 20:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:26.946 20:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:10:26.946 20:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:26.946 20:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.946 20:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.946 [2024-10-17 20:07:12.344156] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:26.946 [2024-10-17 20:07:12.344228] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:26.946 [2024-10-17 20:07:12.344246] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:26.946 [2024-10-17 20:07:12.344264] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:26.946 [2024-10-17 20:07:12.344275] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:10:26.946 [2024-10-17 20:07:12.344290] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:26.946 20:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.946 20:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:26.946 20:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.946 20:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:26.946 20:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:26.946 20:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:26.946 20:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:26.946 20:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.946 20:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.946 20:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.946 20:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.946 20:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.946 20:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.946 20:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.946 20:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.946 20:07:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.946 20:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.946 "name": "Existed_Raid", 00:10:26.946 "uuid": "392ce54c-2294-47c5-a544-26f576a06290", 00:10:26.946 "strip_size_kb": 64, 00:10:26.946 "state": "configuring", 00:10:26.946 "raid_level": "raid0", 00:10:26.946 "superblock": true, 00:10:26.946 "num_base_bdevs": 3, 00:10:26.946 "num_base_bdevs_discovered": 0, 00:10:26.946 "num_base_bdevs_operational": 3, 00:10:26.946 "base_bdevs_list": [ 00:10:26.946 { 00:10:26.946 "name": "BaseBdev1", 00:10:26.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.946 "is_configured": false, 00:10:26.946 "data_offset": 0, 00:10:26.946 "data_size": 0 00:10:26.946 }, 00:10:26.946 { 00:10:26.946 "name": "BaseBdev2", 00:10:26.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.947 "is_configured": false, 00:10:26.947 "data_offset": 0, 00:10:26.947 "data_size": 0 00:10:26.947 }, 00:10:26.947 { 00:10:26.947 "name": "BaseBdev3", 00:10:26.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.947 "is_configured": false, 00:10:26.947 "data_offset": 0, 00:10:26.947 "data_size": 0 00:10:26.947 } 00:10:26.947 ] 00:10:26.947 }' 00:10:26.947 20:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.947 20:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.514 20:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:27.514 20:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.514 20:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.514 [2024-10-17 20:07:12.880213] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:27.514 [2024-10-17 20:07:12.880408] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:27.514 20:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.514 20:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:27.514 20:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.514 20:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.514 [2024-10-17 20:07:12.888254] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:27.514 [2024-10-17 20:07:12.888314] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:27.514 [2024-10-17 20:07:12.888330] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:27.514 [2024-10-17 20:07:12.888347] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:27.514 [2024-10-17 20:07:12.888357] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:27.514 [2024-10-17 20:07:12.888372] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:27.514 20:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.514 20:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:27.514 20:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.514 20:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.514 [2024-10-17 20:07:12.936520] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:27.514 BaseBdev1 
00:10:27.514 20:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.514 20:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:27.514 20:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:27.514 20:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:27.514 20:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:27.514 20:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:27.514 20:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:27.514 20:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:27.514 20:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.514 20:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.514 20:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.514 20:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:27.514 20:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.514 20:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.514 [ 00:10:27.514 { 00:10:27.514 "name": "BaseBdev1", 00:10:27.514 "aliases": [ 00:10:27.514 "f8bf6e7a-0bcb-4e34-bcaa-2403aea36a70" 00:10:27.514 ], 00:10:27.514 "product_name": "Malloc disk", 00:10:27.514 "block_size": 512, 00:10:27.514 "num_blocks": 65536, 00:10:27.514 "uuid": "f8bf6e7a-0bcb-4e34-bcaa-2403aea36a70", 00:10:27.514 "assigned_rate_limits": { 00:10:27.514 
"rw_ios_per_sec": 0, 00:10:27.514 "rw_mbytes_per_sec": 0, 00:10:27.514 "r_mbytes_per_sec": 0, 00:10:27.514 "w_mbytes_per_sec": 0 00:10:27.514 }, 00:10:27.514 "claimed": true, 00:10:27.514 "claim_type": "exclusive_write", 00:10:27.514 "zoned": false, 00:10:27.514 "supported_io_types": { 00:10:27.514 "read": true, 00:10:27.514 "write": true, 00:10:27.514 "unmap": true, 00:10:27.514 "flush": true, 00:10:27.514 "reset": true, 00:10:27.514 "nvme_admin": false, 00:10:27.514 "nvme_io": false, 00:10:27.514 "nvme_io_md": false, 00:10:27.514 "write_zeroes": true, 00:10:27.514 "zcopy": true, 00:10:27.514 "get_zone_info": false, 00:10:27.514 "zone_management": false, 00:10:27.514 "zone_append": false, 00:10:27.514 "compare": false, 00:10:27.514 "compare_and_write": false, 00:10:27.514 "abort": true, 00:10:27.514 "seek_hole": false, 00:10:27.514 "seek_data": false, 00:10:27.514 "copy": true, 00:10:27.514 "nvme_iov_md": false 00:10:27.514 }, 00:10:27.514 "memory_domains": [ 00:10:27.514 { 00:10:27.514 "dma_device_id": "system", 00:10:27.515 "dma_device_type": 1 00:10:27.515 }, 00:10:27.515 { 00:10:27.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.515 "dma_device_type": 2 00:10:27.515 } 00:10:27.515 ], 00:10:27.515 "driver_specific": {} 00:10:27.515 } 00:10:27.515 ] 00:10:27.515 20:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.515 20:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:27.515 20:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:27.515 20:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.515 20:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:27.515 20:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:10:27.515 20:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:27.515 20:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:27.515 20:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.515 20:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.515 20:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.515 20:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.515 20:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.515 20:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.515 20:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.515 20:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.515 20:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.515 20:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.515 "name": "Existed_Raid", 00:10:27.515 "uuid": "97081446-4ab2-4ba8-b744-22e4aee4c747", 00:10:27.515 "strip_size_kb": 64, 00:10:27.515 "state": "configuring", 00:10:27.515 "raid_level": "raid0", 00:10:27.515 "superblock": true, 00:10:27.515 "num_base_bdevs": 3, 00:10:27.515 "num_base_bdevs_discovered": 1, 00:10:27.515 "num_base_bdevs_operational": 3, 00:10:27.515 "base_bdevs_list": [ 00:10:27.515 { 00:10:27.515 "name": "BaseBdev1", 00:10:27.515 "uuid": "f8bf6e7a-0bcb-4e34-bcaa-2403aea36a70", 00:10:27.515 "is_configured": true, 00:10:27.515 "data_offset": 2048, 00:10:27.515 "data_size": 63488 
00:10:27.515 }, 00:10:27.515 { 00:10:27.515 "name": "BaseBdev2", 00:10:27.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.515 "is_configured": false, 00:10:27.515 "data_offset": 0, 00:10:27.515 "data_size": 0 00:10:27.515 }, 00:10:27.515 { 00:10:27.515 "name": "BaseBdev3", 00:10:27.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.515 "is_configured": false, 00:10:27.515 "data_offset": 0, 00:10:27.515 "data_size": 0 00:10:27.515 } 00:10:27.515 ] 00:10:27.515 }' 00:10:27.515 20:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.515 20:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.082 20:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:28.082 20:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.082 20:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.082 [2024-10-17 20:07:13.504875] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:28.082 [2024-10-17 20:07:13.504949] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:28.082 20:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.082 20:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:28.082 20:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.082 20:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.082 [2024-10-17 20:07:13.512917] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:28.082 [2024-10-17 
20:07:13.515484] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:28.082 [2024-10-17 20:07:13.515539] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:28.082 [2024-10-17 20:07:13.515557] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:28.082 [2024-10-17 20:07:13.515585] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:28.082 20:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.082 20:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:28.083 20:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:28.083 20:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:28.083 20:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:28.083 20:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:28.083 20:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:28.083 20:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:28.083 20:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:28.083 20:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.083 20:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.083 20:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.083 20:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:10:28.083 20:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.083 20:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:28.083 20:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.083 20:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.083 20:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.083 20:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.083 "name": "Existed_Raid", 00:10:28.083 "uuid": "5df5e55a-e8e0-4638-bfcb-6feb797abb26", 00:10:28.083 "strip_size_kb": 64, 00:10:28.083 "state": "configuring", 00:10:28.083 "raid_level": "raid0", 00:10:28.083 "superblock": true, 00:10:28.083 "num_base_bdevs": 3, 00:10:28.083 "num_base_bdevs_discovered": 1, 00:10:28.083 "num_base_bdevs_operational": 3, 00:10:28.083 "base_bdevs_list": [ 00:10:28.083 { 00:10:28.083 "name": "BaseBdev1", 00:10:28.083 "uuid": "f8bf6e7a-0bcb-4e34-bcaa-2403aea36a70", 00:10:28.083 "is_configured": true, 00:10:28.083 "data_offset": 2048, 00:10:28.083 "data_size": 63488 00:10:28.083 }, 00:10:28.083 { 00:10:28.083 "name": "BaseBdev2", 00:10:28.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.083 "is_configured": false, 00:10:28.083 "data_offset": 0, 00:10:28.083 "data_size": 0 00:10:28.083 }, 00:10:28.083 { 00:10:28.083 "name": "BaseBdev3", 00:10:28.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.083 "is_configured": false, 00:10:28.083 "data_offset": 0, 00:10:28.083 "data_size": 0 00:10:28.083 } 00:10:28.083 ] 00:10:28.083 }' 00:10:28.083 20:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.083 20:07:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:28.650 20:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:28.650 20:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.650 20:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.650 [2024-10-17 20:07:14.093144] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:28.650 BaseBdev2 00:10:28.650 20:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.650 20:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:28.650 20:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:28.650 20:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:28.650 20:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:28.650 20:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:28.650 20:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:28.650 20:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:28.650 20:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.650 20:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.650 20:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.650 20:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:28.650 20:07:14 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.650 20:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.650 [ 00:10:28.650 { 00:10:28.650 "name": "BaseBdev2", 00:10:28.650 "aliases": [ 00:10:28.650 "98d07eb0-e429-47e0-be3f-759b4ff0ceb5" 00:10:28.650 ], 00:10:28.650 "product_name": "Malloc disk", 00:10:28.650 "block_size": 512, 00:10:28.650 "num_blocks": 65536, 00:10:28.650 "uuid": "98d07eb0-e429-47e0-be3f-759b4ff0ceb5", 00:10:28.650 "assigned_rate_limits": { 00:10:28.651 "rw_ios_per_sec": 0, 00:10:28.651 "rw_mbytes_per_sec": 0, 00:10:28.651 "r_mbytes_per_sec": 0, 00:10:28.651 "w_mbytes_per_sec": 0 00:10:28.651 }, 00:10:28.651 "claimed": true, 00:10:28.651 "claim_type": "exclusive_write", 00:10:28.651 "zoned": false, 00:10:28.651 "supported_io_types": { 00:10:28.651 "read": true, 00:10:28.651 "write": true, 00:10:28.651 "unmap": true, 00:10:28.651 "flush": true, 00:10:28.651 "reset": true, 00:10:28.651 "nvme_admin": false, 00:10:28.651 "nvme_io": false, 00:10:28.651 "nvme_io_md": false, 00:10:28.651 "write_zeroes": true, 00:10:28.651 "zcopy": true, 00:10:28.651 "get_zone_info": false, 00:10:28.651 "zone_management": false, 00:10:28.651 "zone_append": false, 00:10:28.651 "compare": false, 00:10:28.651 "compare_and_write": false, 00:10:28.651 "abort": true, 00:10:28.651 "seek_hole": false, 00:10:28.651 "seek_data": false, 00:10:28.651 "copy": true, 00:10:28.651 "nvme_iov_md": false 00:10:28.651 }, 00:10:28.651 "memory_domains": [ 00:10:28.651 { 00:10:28.651 "dma_device_id": "system", 00:10:28.651 "dma_device_type": 1 00:10:28.651 }, 00:10:28.651 { 00:10:28.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.651 "dma_device_type": 2 00:10:28.651 } 00:10:28.651 ], 00:10:28.651 "driver_specific": {} 00:10:28.651 } 00:10:28.651 ] 00:10:28.651 20:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.651 20:07:14 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@907 -- # return 0 00:10:28.651 20:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:28.651 20:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:28.651 20:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:28.651 20:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:28.651 20:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:28.651 20:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:28.651 20:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:28.651 20:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:28.651 20:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.651 20:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.651 20:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.651 20:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.651 20:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:28.651 20:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.651 20:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.651 20:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.651 20:07:14 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.651 20:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.651 "name": "Existed_Raid", 00:10:28.651 "uuid": "5df5e55a-e8e0-4638-bfcb-6feb797abb26", 00:10:28.651 "strip_size_kb": 64, 00:10:28.651 "state": "configuring", 00:10:28.651 "raid_level": "raid0", 00:10:28.651 "superblock": true, 00:10:28.651 "num_base_bdevs": 3, 00:10:28.651 "num_base_bdevs_discovered": 2, 00:10:28.651 "num_base_bdevs_operational": 3, 00:10:28.651 "base_bdevs_list": [ 00:10:28.651 { 00:10:28.651 "name": "BaseBdev1", 00:10:28.651 "uuid": "f8bf6e7a-0bcb-4e34-bcaa-2403aea36a70", 00:10:28.651 "is_configured": true, 00:10:28.651 "data_offset": 2048, 00:10:28.651 "data_size": 63488 00:10:28.651 }, 00:10:28.651 { 00:10:28.651 "name": "BaseBdev2", 00:10:28.651 "uuid": "98d07eb0-e429-47e0-be3f-759b4ff0ceb5", 00:10:28.651 "is_configured": true, 00:10:28.651 "data_offset": 2048, 00:10:28.651 "data_size": 63488 00:10:28.651 }, 00:10:28.651 { 00:10:28.651 "name": "BaseBdev3", 00:10:28.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.651 "is_configured": false, 00:10:28.651 "data_offset": 0, 00:10:28.651 "data_size": 0 00:10:28.651 } 00:10:28.651 ] 00:10:28.651 }' 00:10:28.651 20:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.651 20:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.225 20:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:29.225 20:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.225 20:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.225 [2024-10-17 20:07:14.698050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:29.225 [2024-10-17 20:07:14.698440] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:29.225 [2024-10-17 20:07:14.698472] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:29.225 BaseBdev3 00:10:29.225 [2024-10-17 20:07:14.698839] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:29.225 [2024-10-17 20:07:14.699096] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:29.225 [2024-10-17 20:07:14.699130] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:29.225 [2024-10-17 20:07:14.699346] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:29.225 20:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.225 20:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:29.225 20:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:29.225 20:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:29.225 20:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:29.225 20:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:29.225 20:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:29.225 20:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:29.225 20:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.225 20:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.226 20:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:10:29.226 20:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:29.226 20:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.226 20:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.226 [ 00:10:29.226 { 00:10:29.226 "name": "BaseBdev3", 00:10:29.226 "aliases": [ 00:10:29.226 "7c92a759-c2d3-4246-a1d6-c52017e6e423" 00:10:29.226 ], 00:10:29.226 "product_name": "Malloc disk", 00:10:29.226 "block_size": 512, 00:10:29.226 "num_blocks": 65536, 00:10:29.226 "uuid": "7c92a759-c2d3-4246-a1d6-c52017e6e423", 00:10:29.226 "assigned_rate_limits": { 00:10:29.226 "rw_ios_per_sec": 0, 00:10:29.226 "rw_mbytes_per_sec": 0, 00:10:29.226 "r_mbytes_per_sec": 0, 00:10:29.226 "w_mbytes_per_sec": 0 00:10:29.226 }, 00:10:29.226 "claimed": true, 00:10:29.226 "claim_type": "exclusive_write", 00:10:29.226 "zoned": false, 00:10:29.226 "supported_io_types": { 00:10:29.226 "read": true, 00:10:29.226 "write": true, 00:10:29.226 "unmap": true, 00:10:29.226 "flush": true, 00:10:29.226 "reset": true, 00:10:29.226 "nvme_admin": false, 00:10:29.226 "nvme_io": false, 00:10:29.226 "nvme_io_md": false, 00:10:29.226 "write_zeroes": true, 00:10:29.226 "zcopy": true, 00:10:29.226 "get_zone_info": false, 00:10:29.226 "zone_management": false, 00:10:29.226 "zone_append": false, 00:10:29.226 "compare": false, 00:10:29.226 "compare_and_write": false, 00:10:29.226 "abort": true, 00:10:29.226 "seek_hole": false, 00:10:29.226 "seek_data": false, 00:10:29.226 "copy": true, 00:10:29.226 "nvme_iov_md": false 00:10:29.226 }, 00:10:29.226 "memory_domains": [ 00:10:29.226 { 00:10:29.226 "dma_device_id": "system", 00:10:29.226 "dma_device_type": 1 00:10:29.226 }, 00:10:29.226 { 00:10:29.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.226 "dma_device_type": 2 00:10:29.226 } 00:10:29.226 ], 00:10:29.226 "driver_specific": 
{} 00:10:29.226 } 00:10:29.226 ] 00:10:29.226 20:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.226 20:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:29.226 20:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:29.226 20:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:29.226 20:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:10:29.226 20:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:29.226 20:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:29.226 20:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:29.226 20:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:29.226 20:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:29.226 20:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.226 20:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.226 20:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.226 20:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.226 20:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.226 20:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.226 20:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:29.226 20:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.226 20:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.226 20:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.226 "name": "Existed_Raid", 00:10:29.226 "uuid": "5df5e55a-e8e0-4638-bfcb-6feb797abb26", 00:10:29.226 "strip_size_kb": 64, 00:10:29.226 "state": "online", 00:10:29.226 "raid_level": "raid0", 00:10:29.226 "superblock": true, 00:10:29.226 "num_base_bdevs": 3, 00:10:29.226 "num_base_bdevs_discovered": 3, 00:10:29.226 "num_base_bdevs_operational": 3, 00:10:29.226 "base_bdevs_list": [ 00:10:29.226 { 00:10:29.226 "name": "BaseBdev1", 00:10:29.226 "uuid": "f8bf6e7a-0bcb-4e34-bcaa-2403aea36a70", 00:10:29.226 "is_configured": true, 00:10:29.226 "data_offset": 2048, 00:10:29.226 "data_size": 63488 00:10:29.226 }, 00:10:29.226 { 00:10:29.226 "name": "BaseBdev2", 00:10:29.226 "uuid": "98d07eb0-e429-47e0-be3f-759b4ff0ceb5", 00:10:29.226 "is_configured": true, 00:10:29.226 "data_offset": 2048, 00:10:29.226 "data_size": 63488 00:10:29.226 }, 00:10:29.226 { 00:10:29.226 "name": "BaseBdev3", 00:10:29.226 "uuid": "7c92a759-c2d3-4246-a1d6-c52017e6e423", 00:10:29.226 "is_configured": true, 00:10:29.226 "data_offset": 2048, 00:10:29.226 "data_size": 63488 00:10:29.226 } 00:10:29.226 ] 00:10:29.226 }' 00:10:29.226 20:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.226 20:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.794 20:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:29.794 20:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:29.794 20:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:10:29.794 20:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:29.794 20:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:29.794 20:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:29.794 20:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:29.794 20:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:29.794 20:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.794 20:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.794 [2024-10-17 20:07:15.262692] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:29.794 20:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.794 20:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:29.794 "name": "Existed_Raid", 00:10:29.794 "aliases": [ 00:10:29.794 "5df5e55a-e8e0-4638-bfcb-6feb797abb26" 00:10:29.794 ], 00:10:29.794 "product_name": "Raid Volume", 00:10:29.794 "block_size": 512, 00:10:29.794 "num_blocks": 190464, 00:10:29.794 "uuid": "5df5e55a-e8e0-4638-bfcb-6feb797abb26", 00:10:29.794 "assigned_rate_limits": { 00:10:29.794 "rw_ios_per_sec": 0, 00:10:29.794 "rw_mbytes_per_sec": 0, 00:10:29.794 "r_mbytes_per_sec": 0, 00:10:29.794 "w_mbytes_per_sec": 0 00:10:29.794 }, 00:10:29.794 "claimed": false, 00:10:29.794 "zoned": false, 00:10:29.794 "supported_io_types": { 00:10:29.794 "read": true, 00:10:29.794 "write": true, 00:10:29.794 "unmap": true, 00:10:29.794 "flush": true, 00:10:29.794 "reset": true, 00:10:29.794 "nvme_admin": false, 00:10:29.794 "nvme_io": false, 00:10:29.794 "nvme_io_md": false, 00:10:29.794 
"write_zeroes": true, 00:10:29.794 "zcopy": false, 00:10:29.794 "get_zone_info": false, 00:10:29.794 "zone_management": false, 00:10:29.794 "zone_append": false, 00:10:29.794 "compare": false, 00:10:29.794 "compare_and_write": false, 00:10:29.794 "abort": false, 00:10:29.794 "seek_hole": false, 00:10:29.794 "seek_data": false, 00:10:29.794 "copy": false, 00:10:29.794 "nvme_iov_md": false 00:10:29.794 }, 00:10:29.794 "memory_domains": [ 00:10:29.794 { 00:10:29.794 "dma_device_id": "system", 00:10:29.794 "dma_device_type": 1 00:10:29.794 }, 00:10:29.794 { 00:10:29.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.794 "dma_device_type": 2 00:10:29.794 }, 00:10:29.794 { 00:10:29.794 "dma_device_id": "system", 00:10:29.794 "dma_device_type": 1 00:10:29.794 }, 00:10:29.794 { 00:10:29.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.794 "dma_device_type": 2 00:10:29.794 }, 00:10:29.794 { 00:10:29.794 "dma_device_id": "system", 00:10:29.794 "dma_device_type": 1 00:10:29.794 }, 00:10:29.794 { 00:10:29.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.794 "dma_device_type": 2 00:10:29.794 } 00:10:29.794 ], 00:10:29.794 "driver_specific": { 00:10:29.794 "raid": { 00:10:29.794 "uuid": "5df5e55a-e8e0-4638-bfcb-6feb797abb26", 00:10:29.794 "strip_size_kb": 64, 00:10:29.794 "state": "online", 00:10:29.794 "raid_level": "raid0", 00:10:29.794 "superblock": true, 00:10:29.794 "num_base_bdevs": 3, 00:10:29.794 "num_base_bdevs_discovered": 3, 00:10:29.794 "num_base_bdevs_operational": 3, 00:10:29.794 "base_bdevs_list": [ 00:10:29.794 { 00:10:29.794 "name": "BaseBdev1", 00:10:29.794 "uuid": "f8bf6e7a-0bcb-4e34-bcaa-2403aea36a70", 00:10:29.794 "is_configured": true, 00:10:29.794 "data_offset": 2048, 00:10:29.794 "data_size": 63488 00:10:29.794 }, 00:10:29.794 { 00:10:29.794 "name": "BaseBdev2", 00:10:29.794 "uuid": "98d07eb0-e429-47e0-be3f-759b4ff0ceb5", 00:10:29.794 "is_configured": true, 00:10:29.794 "data_offset": 2048, 00:10:29.794 "data_size": 63488 00:10:29.794 }, 
00:10:29.794 { 00:10:29.794 "name": "BaseBdev3", 00:10:29.794 "uuid": "7c92a759-c2d3-4246-a1d6-c52017e6e423", 00:10:29.794 "is_configured": true, 00:10:29.794 "data_offset": 2048, 00:10:29.794 "data_size": 63488 00:10:29.794 } 00:10:29.794 ] 00:10:29.794 } 00:10:29.794 } 00:10:29.794 }' 00:10:29.794 20:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:29.794 20:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:29.794 BaseBdev2 00:10:29.794 BaseBdev3' 00:10:29.794 20:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.794 20:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:29.794 20:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:29.794 20:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:29.794 20:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.794 20:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.794 20:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.794 20:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.052 20:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:30.053 20:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:30.053 20:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:30.053 
20:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:30.053 20:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:30.053 20:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.053 20:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.053 20:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.053 20:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:30.053 20:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:30.053 20:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:30.053 20:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:30.053 20:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.053 20:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.053 20:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:30.053 20:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.053 20:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:30.053 20:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:30.053 20:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:30.053 20:07:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.053 20:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.053 [2024-10-17 20:07:15.578501] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:30.053 [2024-10-17 20:07:15.578536] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:30.053 [2024-10-17 20:07:15.578611] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:30.053 20:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.053 20:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:30.053 20:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:30.053 20:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:30.053 20:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:30.053 20:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:30.053 20:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:10:30.053 20:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:30.053 20:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:30.053 20:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:30.053 20:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:30.053 20:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:30.053 20:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:30.053 20:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.053 20:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.053 20:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.053 20:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.053 20:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.053 20:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.053 20:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.053 20:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.311 20:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.311 "name": "Existed_Raid", 00:10:30.311 "uuid": "5df5e55a-e8e0-4638-bfcb-6feb797abb26", 00:10:30.311 "strip_size_kb": 64, 00:10:30.311 "state": "offline", 00:10:30.311 "raid_level": "raid0", 00:10:30.311 "superblock": true, 00:10:30.311 "num_base_bdevs": 3, 00:10:30.311 "num_base_bdevs_discovered": 2, 00:10:30.311 "num_base_bdevs_operational": 2, 00:10:30.311 "base_bdevs_list": [ 00:10:30.311 { 00:10:30.311 "name": null, 00:10:30.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.311 "is_configured": false, 00:10:30.311 "data_offset": 0, 00:10:30.311 "data_size": 63488 00:10:30.311 }, 00:10:30.311 { 00:10:30.311 "name": "BaseBdev2", 00:10:30.311 "uuid": "98d07eb0-e429-47e0-be3f-759b4ff0ceb5", 00:10:30.311 "is_configured": true, 00:10:30.311 "data_offset": 2048, 00:10:30.311 "data_size": 63488 00:10:30.311 }, 00:10:30.311 { 00:10:30.311 "name": "BaseBdev3", 00:10:30.311 "uuid": "7c92a759-c2d3-4246-a1d6-c52017e6e423", 
00:10:30.311 "is_configured": true, 00:10:30.311 "data_offset": 2048, 00:10:30.311 "data_size": 63488 00:10:30.311 } 00:10:30.311 ] 00:10:30.311 }' 00:10:30.311 20:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.311 20:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.569 20:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:30.569 20:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:30.569 20:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.569 20:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.569 20:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.569 20:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:30.569 20:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.827 20:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:30.827 20:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:30.827 20:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:30.827 20:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.827 20:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.827 [2024-10-17 20:07:16.257678] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:30.827 20:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.827 20:07:16 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:30.827 20:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:30.827 20:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.827 20:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:30.827 20:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.827 20:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.827 20:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.827 20:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:30.827 20:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:30.827 20:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:30.827 20:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.827 20:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.827 [2024-10-17 20:07:16.398227] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:30.827 [2024-10-17 20:07:16.398287] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:31.085 20:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.085 20:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:31.085 20:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:31.085 20:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:31.085 20:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.085 20:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:31.085 20:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.085 20:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.085 20:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:31.085 20:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:31.085 20:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:31.085 20:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:31.086 20:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:31.086 20:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:31.086 20:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.086 20:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.086 BaseBdev2 00:10:31.086 20:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.086 20:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:31.086 20:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:31.086 20:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:31.086 20:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:31.086 20:07:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:31.086 20:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:31.086 20:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:31.086 20:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.086 20:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.086 20:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.086 20:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:31.086 20:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.086 20:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.086 [ 00:10:31.086 { 00:10:31.086 "name": "BaseBdev2", 00:10:31.086 "aliases": [ 00:10:31.086 "19178bd7-cdf2-4ea2-ad18-ffd9b632b01c" 00:10:31.086 ], 00:10:31.086 "product_name": "Malloc disk", 00:10:31.086 "block_size": 512, 00:10:31.086 "num_blocks": 65536, 00:10:31.086 "uuid": "19178bd7-cdf2-4ea2-ad18-ffd9b632b01c", 00:10:31.086 "assigned_rate_limits": { 00:10:31.086 "rw_ios_per_sec": 0, 00:10:31.086 "rw_mbytes_per_sec": 0, 00:10:31.086 "r_mbytes_per_sec": 0, 00:10:31.086 "w_mbytes_per_sec": 0 00:10:31.086 }, 00:10:31.086 "claimed": false, 00:10:31.086 "zoned": false, 00:10:31.086 "supported_io_types": { 00:10:31.086 "read": true, 00:10:31.086 "write": true, 00:10:31.086 "unmap": true, 00:10:31.086 "flush": true, 00:10:31.086 "reset": true, 00:10:31.086 "nvme_admin": false, 00:10:31.086 "nvme_io": false, 00:10:31.086 "nvme_io_md": false, 00:10:31.086 "write_zeroes": true, 00:10:31.086 "zcopy": true, 00:10:31.086 "get_zone_info": false, 00:10:31.086 
"zone_management": false, 00:10:31.086 "zone_append": false, 00:10:31.086 "compare": false, 00:10:31.086 "compare_and_write": false, 00:10:31.086 "abort": true, 00:10:31.086 "seek_hole": false, 00:10:31.086 "seek_data": false, 00:10:31.086 "copy": true, 00:10:31.086 "nvme_iov_md": false 00:10:31.086 }, 00:10:31.086 "memory_domains": [ 00:10:31.086 { 00:10:31.086 "dma_device_id": "system", 00:10:31.086 "dma_device_type": 1 00:10:31.086 }, 00:10:31.086 { 00:10:31.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.086 "dma_device_type": 2 00:10:31.086 } 00:10:31.086 ], 00:10:31.086 "driver_specific": {} 00:10:31.086 } 00:10:31.086 ] 00:10:31.086 20:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.086 20:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:31.086 20:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:31.086 20:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:31.086 20:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:31.086 20:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.086 20:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.086 BaseBdev3 00:10:31.086 20:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.086 20:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:31.086 20:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:31.086 20:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:31.086 20:07:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local i 00:10:31.086 20:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:31.086 20:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:31.086 20:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:31.086 20:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.086 20:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.086 20:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.086 20:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:31.086 20:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.086 20:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.086 [ 00:10:31.086 { 00:10:31.086 "name": "BaseBdev3", 00:10:31.086 "aliases": [ 00:10:31.086 "4f119d5a-2c5b-48d1-8e42-a70a453404ed" 00:10:31.086 ], 00:10:31.086 "product_name": "Malloc disk", 00:10:31.086 "block_size": 512, 00:10:31.086 "num_blocks": 65536, 00:10:31.086 "uuid": "4f119d5a-2c5b-48d1-8e42-a70a453404ed", 00:10:31.086 "assigned_rate_limits": { 00:10:31.086 "rw_ios_per_sec": 0, 00:10:31.086 "rw_mbytes_per_sec": 0, 00:10:31.086 "r_mbytes_per_sec": 0, 00:10:31.086 "w_mbytes_per_sec": 0 00:10:31.086 }, 00:10:31.086 "claimed": false, 00:10:31.086 "zoned": false, 00:10:31.086 "supported_io_types": { 00:10:31.086 "read": true, 00:10:31.086 "write": true, 00:10:31.086 "unmap": true, 00:10:31.086 "flush": true, 00:10:31.086 "reset": true, 00:10:31.086 "nvme_admin": false, 00:10:31.086 "nvme_io": false, 00:10:31.086 "nvme_io_md": false, 00:10:31.086 "write_zeroes": true, 00:10:31.086 
"zcopy": true, 00:10:31.086 "get_zone_info": false, 00:10:31.086 "zone_management": false, 00:10:31.086 "zone_append": false, 00:10:31.086 "compare": false, 00:10:31.086 "compare_and_write": false, 00:10:31.086 "abort": true, 00:10:31.086 "seek_hole": false, 00:10:31.086 "seek_data": false, 00:10:31.086 "copy": true, 00:10:31.086 "nvme_iov_md": false 00:10:31.086 }, 00:10:31.086 "memory_domains": [ 00:10:31.086 { 00:10:31.086 "dma_device_id": "system", 00:10:31.086 "dma_device_type": 1 00:10:31.086 }, 00:10:31.086 { 00:10:31.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.086 "dma_device_type": 2 00:10:31.086 } 00:10:31.086 ], 00:10:31.086 "driver_specific": {} 00:10:31.086 } 00:10:31.086 ] 00:10:31.086 20:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.086 20:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:31.086 20:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:31.086 20:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:31.086 20:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:31.086 20:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.086 20:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.086 [2024-10-17 20:07:16.691571] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:31.086 [2024-10-17 20:07:16.691638] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:31.086 [2024-10-17 20:07:16.691685] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:31.086 [2024-10-17 20:07:16.694759] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:31.086 20:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.086 20:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:31.086 20:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.086 20:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:31.086 20:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:31.086 20:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:31.086 20:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:31.086 20:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.086 20:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.086 20:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.086 20:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.086 20:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.086 20:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.086 20:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.086 20:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.086 20:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.344 20:07:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.344 "name": "Existed_Raid", 00:10:31.344 "uuid": "066263be-1228-4a95-929f-d5788a7cb7b9", 00:10:31.344 "strip_size_kb": 64, 00:10:31.344 "state": "configuring", 00:10:31.344 "raid_level": "raid0", 00:10:31.344 "superblock": true, 00:10:31.344 "num_base_bdevs": 3, 00:10:31.344 "num_base_bdevs_discovered": 2, 00:10:31.344 "num_base_bdevs_operational": 3, 00:10:31.344 "base_bdevs_list": [ 00:10:31.344 { 00:10:31.344 "name": "BaseBdev1", 00:10:31.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.344 "is_configured": false, 00:10:31.344 "data_offset": 0, 00:10:31.344 "data_size": 0 00:10:31.344 }, 00:10:31.344 { 00:10:31.344 "name": "BaseBdev2", 00:10:31.344 "uuid": "19178bd7-cdf2-4ea2-ad18-ffd9b632b01c", 00:10:31.344 "is_configured": true, 00:10:31.344 "data_offset": 2048, 00:10:31.344 "data_size": 63488 00:10:31.344 }, 00:10:31.344 { 00:10:31.344 "name": "BaseBdev3", 00:10:31.344 "uuid": "4f119d5a-2c5b-48d1-8e42-a70a453404ed", 00:10:31.344 "is_configured": true, 00:10:31.344 "data_offset": 2048, 00:10:31.344 "data_size": 63488 00:10:31.344 } 00:10:31.344 ] 00:10:31.344 }' 00:10:31.344 20:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.344 20:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.908 20:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:31.908 20:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.908 20:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.908 [2024-10-17 20:07:17.291693] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:31.908 20:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.908 20:07:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:31.908 20:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.908 20:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:31.908 20:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:31.908 20:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:31.908 20:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:31.908 20:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.908 20:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.908 20:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.908 20:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.908 20:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.908 20:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.908 20:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.908 20:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.908 20:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.908 20:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.908 "name": "Existed_Raid", 00:10:31.908 "uuid": "066263be-1228-4a95-929f-d5788a7cb7b9", 00:10:31.908 "strip_size_kb": 64, 
00:10:31.908 "state": "configuring", 00:10:31.908 "raid_level": "raid0", 00:10:31.908 "superblock": true, 00:10:31.908 "num_base_bdevs": 3, 00:10:31.908 "num_base_bdevs_discovered": 1, 00:10:31.908 "num_base_bdevs_operational": 3, 00:10:31.908 "base_bdevs_list": [ 00:10:31.908 { 00:10:31.908 "name": "BaseBdev1", 00:10:31.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.908 "is_configured": false, 00:10:31.908 "data_offset": 0, 00:10:31.908 "data_size": 0 00:10:31.908 }, 00:10:31.908 { 00:10:31.908 "name": null, 00:10:31.908 "uuid": "19178bd7-cdf2-4ea2-ad18-ffd9b632b01c", 00:10:31.908 "is_configured": false, 00:10:31.908 "data_offset": 0, 00:10:31.908 "data_size": 63488 00:10:31.908 }, 00:10:31.908 { 00:10:31.908 "name": "BaseBdev3", 00:10:31.908 "uuid": "4f119d5a-2c5b-48d1-8e42-a70a453404ed", 00:10:31.908 "is_configured": true, 00:10:31.908 "data_offset": 2048, 00:10:31.908 "data_size": 63488 00:10:31.908 } 00:10:31.908 ] 00:10:31.908 }' 00:10:31.908 20:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.908 20:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.472 20:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.472 20:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:32.472 20:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.472 20:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.472 20:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.472 20:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:32.472 20:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:10:32.472 20:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.472 20:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.472 [2024-10-17 20:07:17.940916] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:32.472 BaseBdev1 00:10:32.472 20:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.472 20:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:32.472 20:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:32.472 20:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:32.472 20:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:32.472 20:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:32.472 20:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:32.472 20:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:32.472 20:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.472 20:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.472 20:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.472 20:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:32.472 20:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.472 20:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.472 
[ 00:10:32.472 { 00:10:32.472 "name": "BaseBdev1", 00:10:32.472 "aliases": [ 00:10:32.472 "072930e4-c527-4415-b076-6bb3a5921423" 00:10:32.472 ], 00:10:32.472 "product_name": "Malloc disk", 00:10:32.472 "block_size": 512, 00:10:32.472 "num_blocks": 65536, 00:10:32.472 "uuid": "072930e4-c527-4415-b076-6bb3a5921423", 00:10:32.472 "assigned_rate_limits": { 00:10:32.472 "rw_ios_per_sec": 0, 00:10:32.472 "rw_mbytes_per_sec": 0, 00:10:32.472 "r_mbytes_per_sec": 0, 00:10:32.472 "w_mbytes_per_sec": 0 00:10:32.472 }, 00:10:32.472 "claimed": true, 00:10:32.472 "claim_type": "exclusive_write", 00:10:32.472 "zoned": false, 00:10:32.472 "supported_io_types": { 00:10:32.472 "read": true, 00:10:32.472 "write": true, 00:10:32.472 "unmap": true, 00:10:32.472 "flush": true, 00:10:32.472 "reset": true, 00:10:32.472 "nvme_admin": false, 00:10:32.472 "nvme_io": false, 00:10:32.472 "nvme_io_md": false, 00:10:32.472 "write_zeroes": true, 00:10:32.472 "zcopy": true, 00:10:32.472 "get_zone_info": false, 00:10:32.472 "zone_management": false, 00:10:32.472 "zone_append": false, 00:10:32.472 "compare": false, 00:10:32.472 "compare_and_write": false, 00:10:32.472 "abort": true, 00:10:32.472 "seek_hole": false, 00:10:32.472 "seek_data": false, 00:10:32.472 "copy": true, 00:10:32.472 "nvme_iov_md": false 00:10:32.472 }, 00:10:32.472 "memory_domains": [ 00:10:32.472 { 00:10:32.472 "dma_device_id": "system", 00:10:32.472 "dma_device_type": 1 00:10:32.472 }, 00:10:32.472 { 00:10:32.472 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.472 "dma_device_type": 2 00:10:32.472 } 00:10:32.472 ], 00:10:32.472 "driver_specific": {} 00:10:32.472 } 00:10:32.472 ] 00:10:32.472 20:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.472 20:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:32.472 20:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:10:32.472 20:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.472 20:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.472 20:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:32.472 20:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:32.472 20:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:32.472 20:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.472 20:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.472 20:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.472 20:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.472 20:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.472 20:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.472 20:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.472 20:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.472 20:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.472 20:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.472 "name": "Existed_Raid", 00:10:32.472 "uuid": "066263be-1228-4a95-929f-d5788a7cb7b9", 00:10:32.472 "strip_size_kb": 64, 00:10:32.472 "state": "configuring", 00:10:32.472 "raid_level": "raid0", 00:10:32.472 "superblock": true, 
00:10:32.472 "num_base_bdevs": 3, 00:10:32.472 "num_base_bdevs_discovered": 2, 00:10:32.472 "num_base_bdevs_operational": 3, 00:10:32.472 "base_bdevs_list": [ 00:10:32.472 { 00:10:32.472 "name": "BaseBdev1", 00:10:32.472 "uuid": "072930e4-c527-4415-b076-6bb3a5921423", 00:10:32.472 "is_configured": true, 00:10:32.472 "data_offset": 2048, 00:10:32.472 "data_size": 63488 00:10:32.472 }, 00:10:32.472 { 00:10:32.472 "name": null, 00:10:32.472 "uuid": "19178bd7-cdf2-4ea2-ad18-ffd9b632b01c", 00:10:32.472 "is_configured": false, 00:10:32.472 "data_offset": 0, 00:10:32.472 "data_size": 63488 00:10:32.472 }, 00:10:32.472 { 00:10:32.472 "name": "BaseBdev3", 00:10:32.472 "uuid": "4f119d5a-2c5b-48d1-8e42-a70a453404ed", 00:10:32.472 "is_configured": true, 00:10:32.472 "data_offset": 2048, 00:10:32.472 "data_size": 63488 00:10:32.472 } 00:10:32.472 ] 00:10:32.472 }' 00:10:32.472 20:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.472 20:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.105 20:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:33.105 20:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.105 20:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.105 20:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.105 20:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.105 20:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:33.105 20:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:33.105 20:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:10:33.105 20:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.105 [2024-10-17 20:07:18.569198] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:33.105 20:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.105 20:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:33.105 20:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:33.105 20:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:33.105 20:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:33.105 20:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:33.105 20:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:33.105 20:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.105 20:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.105 20:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.105 20:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.105 20:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.105 20:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.105 20:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.105 20:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:10:33.105 20:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.105 20:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.105 "name": "Existed_Raid", 00:10:33.105 "uuid": "066263be-1228-4a95-929f-d5788a7cb7b9", 00:10:33.105 "strip_size_kb": 64, 00:10:33.105 "state": "configuring", 00:10:33.105 "raid_level": "raid0", 00:10:33.105 "superblock": true, 00:10:33.105 "num_base_bdevs": 3, 00:10:33.105 "num_base_bdevs_discovered": 1, 00:10:33.105 "num_base_bdevs_operational": 3, 00:10:33.105 "base_bdevs_list": [ 00:10:33.105 { 00:10:33.105 "name": "BaseBdev1", 00:10:33.105 "uuid": "072930e4-c527-4415-b076-6bb3a5921423", 00:10:33.105 "is_configured": true, 00:10:33.105 "data_offset": 2048, 00:10:33.105 "data_size": 63488 00:10:33.105 }, 00:10:33.105 { 00:10:33.105 "name": null, 00:10:33.105 "uuid": "19178bd7-cdf2-4ea2-ad18-ffd9b632b01c", 00:10:33.105 "is_configured": false, 00:10:33.105 "data_offset": 0, 00:10:33.105 "data_size": 63488 00:10:33.105 }, 00:10:33.105 { 00:10:33.105 "name": null, 00:10:33.105 "uuid": "4f119d5a-2c5b-48d1-8e42-a70a453404ed", 00:10:33.105 "is_configured": false, 00:10:33.105 "data_offset": 0, 00:10:33.105 "data_size": 63488 00:10:33.105 } 00:10:33.105 ] 00:10:33.105 }' 00:10:33.105 20:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.106 20:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.671 20:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.671 20:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:33.671 20:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.671 20:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:10:33.671 20:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.671 20:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:33.671 20:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:33.671 20:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.671 20:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.671 [2024-10-17 20:07:19.157449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:33.671 20:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.671 20:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:33.671 20:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:33.671 20:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:33.671 20:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:33.671 20:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:33.671 20:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:33.671 20:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.671 20:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.671 20:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.671 20:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:10:33.671 20:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.671 20:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.671 20:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.671 20:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.671 20:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.671 20:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.671 "name": "Existed_Raid", 00:10:33.671 "uuid": "066263be-1228-4a95-929f-d5788a7cb7b9", 00:10:33.671 "strip_size_kb": 64, 00:10:33.671 "state": "configuring", 00:10:33.671 "raid_level": "raid0", 00:10:33.671 "superblock": true, 00:10:33.671 "num_base_bdevs": 3, 00:10:33.671 "num_base_bdevs_discovered": 2, 00:10:33.671 "num_base_bdevs_operational": 3, 00:10:33.671 "base_bdevs_list": [ 00:10:33.671 { 00:10:33.671 "name": "BaseBdev1", 00:10:33.671 "uuid": "072930e4-c527-4415-b076-6bb3a5921423", 00:10:33.671 "is_configured": true, 00:10:33.671 "data_offset": 2048, 00:10:33.671 "data_size": 63488 00:10:33.671 }, 00:10:33.671 { 00:10:33.671 "name": null, 00:10:33.671 "uuid": "19178bd7-cdf2-4ea2-ad18-ffd9b632b01c", 00:10:33.671 "is_configured": false, 00:10:33.671 "data_offset": 0, 00:10:33.671 "data_size": 63488 00:10:33.671 }, 00:10:33.671 { 00:10:33.671 "name": "BaseBdev3", 00:10:33.671 "uuid": "4f119d5a-2c5b-48d1-8e42-a70a453404ed", 00:10:33.671 "is_configured": true, 00:10:33.671 "data_offset": 2048, 00:10:33.671 "data_size": 63488 00:10:33.671 } 00:10:33.671 ] 00:10:33.671 }' 00:10:33.671 20:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.671 20:07:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:34.237 20:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.237 20:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:34.237 20:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.237 20:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.237 20:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.237 20:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:34.237 20:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:34.237 20:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.237 20:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.237 [2024-10-17 20:07:19.709813] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:34.237 20:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.237 20:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:34.237 20:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:34.237 20:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:34.237 20:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:34.237 20:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:34.237 20:07:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:34.237 20:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.237 20:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.237 20:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.237 20:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.237 20:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.237 20:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.237 20:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.237 20:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.237 20:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.237 20:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.237 "name": "Existed_Raid", 00:10:34.237 "uuid": "066263be-1228-4a95-929f-d5788a7cb7b9", 00:10:34.237 "strip_size_kb": 64, 00:10:34.237 "state": "configuring", 00:10:34.237 "raid_level": "raid0", 00:10:34.237 "superblock": true, 00:10:34.237 "num_base_bdevs": 3, 00:10:34.237 "num_base_bdevs_discovered": 1, 00:10:34.237 "num_base_bdevs_operational": 3, 00:10:34.237 "base_bdevs_list": [ 00:10:34.237 { 00:10:34.237 "name": null, 00:10:34.237 "uuid": "072930e4-c527-4415-b076-6bb3a5921423", 00:10:34.237 "is_configured": false, 00:10:34.237 "data_offset": 0, 00:10:34.237 "data_size": 63488 00:10:34.237 }, 00:10:34.237 { 00:10:34.237 "name": null, 00:10:34.237 "uuid": "19178bd7-cdf2-4ea2-ad18-ffd9b632b01c", 00:10:34.237 "is_configured": false, 00:10:34.237 "data_offset": 0, 00:10:34.237 
"data_size": 63488 00:10:34.237 }, 00:10:34.237 { 00:10:34.237 "name": "BaseBdev3", 00:10:34.237 "uuid": "4f119d5a-2c5b-48d1-8e42-a70a453404ed", 00:10:34.237 "is_configured": true, 00:10:34.237 "data_offset": 2048, 00:10:34.237 "data_size": 63488 00:10:34.237 } 00:10:34.237 ] 00:10:34.237 }' 00:10:34.237 20:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.237 20:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.803 20:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:34.803 20:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.803 20:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.803 20:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.803 20:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.803 20:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:34.803 20:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:34.803 20:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.803 20:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.803 [2024-10-17 20:07:20.366107] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:34.803 20:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.803 20:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:34.803 20:07:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:34.803 20:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:34.803 20:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:34.803 20:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:34.803 20:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:34.803 20:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.803 20:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.803 20:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.803 20:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.803 20:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.803 20:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.803 20:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.803 20:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.803 20:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.803 20:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.803 "name": "Existed_Raid", 00:10:34.803 "uuid": "066263be-1228-4a95-929f-d5788a7cb7b9", 00:10:34.803 "strip_size_kb": 64, 00:10:34.803 "state": "configuring", 00:10:34.803 "raid_level": "raid0", 00:10:34.803 "superblock": true, 00:10:34.803 "num_base_bdevs": 3, 00:10:34.803 
"num_base_bdevs_discovered": 2, 00:10:34.803 "num_base_bdevs_operational": 3, 00:10:34.803 "base_bdevs_list": [ 00:10:34.803 { 00:10:34.803 "name": null, 00:10:34.803 "uuid": "072930e4-c527-4415-b076-6bb3a5921423", 00:10:34.803 "is_configured": false, 00:10:34.803 "data_offset": 0, 00:10:34.803 "data_size": 63488 00:10:34.803 }, 00:10:34.803 { 00:10:34.803 "name": "BaseBdev2", 00:10:34.803 "uuid": "19178bd7-cdf2-4ea2-ad18-ffd9b632b01c", 00:10:34.803 "is_configured": true, 00:10:34.803 "data_offset": 2048, 00:10:34.803 "data_size": 63488 00:10:34.803 }, 00:10:34.803 { 00:10:34.803 "name": "BaseBdev3", 00:10:34.803 "uuid": "4f119d5a-2c5b-48d1-8e42-a70a453404ed", 00:10:34.803 "is_configured": true, 00:10:34.803 "data_offset": 2048, 00:10:34.803 "data_size": 63488 00:10:34.803 } 00:10:34.803 ] 00:10:34.803 }' 00:10:34.803 20:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.804 20:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.368 20:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.368 20:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.368 20:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.368 20:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:35.368 20:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.368 20:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:35.368 20:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.368 20:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.368 20:07:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:35.368 20:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.368 20:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.368 20:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 072930e4-c527-4415-b076-6bb3a5921423 00:10:35.368 20:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.368 20:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.670 [2024-10-17 20:07:21.032354] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:35.670 [2024-10-17 20:07:21.032931] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:35.670 [2024-10-17 20:07:21.032966] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:35.670 NewBaseBdev 00:10:35.670 [2024-10-17 20:07:21.033313] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:35.670 [2024-10-17 20:07:21.033501] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:35.670 [2024-10-17 20:07:21.033519] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:35.670 [2024-10-17 20:07:21.033693] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:35.670 20:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.670 20:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:35.670 20:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 
00:10:35.670 20:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:35.670 20:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:35.670 20:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:35.670 20:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:35.670 20:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:35.670 20:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.670 20:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.670 20:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.670 20:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:35.670 20:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.670 20:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.670 [ 00:10:35.670 { 00:10:35.670 "name": "NewBaseBdev", 00:10:35.670 "aliases": [ 00:10:35.670 "072930e4-c527-4415-b076-6bb3a5921423" 00:10:35.670 ], 00:10:35.670 "product_name": "Malloc disk", 00:10:35.670 "block_size": 512, 00:10:35.670 "num_blocks": 65536, 00:10:35.670 "uuid": "072930e4-c527-4415-b076-6bb3a5921423", 00:10:35.670 "assigned_rate_limits": { 00:10:35.670 "rw_ios_per_sec": 0, 00:10:35.670 "rw_mbytes_per_sec": 0, 00:10:35.670 "r_mbytes_per_sec": 0, 00:10:35.670 "w_mbytes_per_sec": 0 00:10:35.670 }, 00:10:35.670 "claimed": true, 00:10:35.670 "claim_type": "exclusive_write", 00:10:35.670 "zoned": false, 00:10:35.670 "supported_io_types": { 00:10:35.670 "read": true, 00:10:35.670 "write": true, 
00:10:35.670 "unmap": true, 00:10:35.670 "flush": true, 00:10:35.670 "reset": true, 00:10:35.670 "nvme_admin": false, 00:10:35.670 "nvme_io": false, 00:10:35.670 "nvme_io_md": false, 00:10:35.670 "write_zeroes": true, 00:10:35.670 "zcopy": true, 00:10:35.670 "get_zone_info": false, 00:10:35.670 "zone_management": false, 00:10:35.670 "zone_append": false, 00:10:35.670 "compare": false, 00:10:35.670 "compare_and_write": false, 00:10:35.670 "abort": true, 00:10:35.670 "seek_hole": false, 00:10:35.670 "seek_data": false, 00:10:35.670 "copy": true, 00:10:35.670 "nvme_iov_md": false 00:10:35.670 }, 00:10:35.670 "memory_domains": [ 00:10:35.670 { 00:10:35.670 "dma_device_id": "system", 00:10:35.670 "dma_device_type": 1 00:10:35.670 }, 00:10:35.670 { 00:10:35.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.670 "dma_device_type": 2 00:10:35.670 } 00:10:35.670 ], 00:10:35.670 "driver_specific": {} 00:10:35.670 } 00:10:35.670 ] 00:10:35.670 20:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.670 20:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:35.670 20:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:10:35.670 20:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:35.670 20:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:35.670 20:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:35.670 20:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:35.670 20:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:35.670 20:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:35.670 20:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.670 20:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.670 20:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.671 20:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.671 20:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.671 20:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.671 20:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.671 20:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.671 20:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.671 "name": "Existed_Raid", 00:10:35.671 "uuid": "066263be-1228-4a95-929f-d5788a7cb7b9", 00:10:35.671 "strip_size_kb": 64, 00:10:35.671 "state": "online", 00:10:35.671 "raid_level": "raid0", 00:10:35.671 "superblock": true, 00:10:35.671 "num_base_bdevs": 3, 00:10:35.671 "num_base_bdevs_discovered": 3, 00:10:35.671 "num_base_bdevs_operational": 3, 00:10:35.671 "base_bdevs_list": [ 00:10:35.671 { 00:10:35.671 "name": "NewBaseBdev", 00:10:35.671 "uuid": "072930e4-c527-4415-b076-6bb3a5921423", 00:10:35.671 "is_configured": true, 00:10:35.671 "data_offset": 2048, 00:10:35.671 "data_size": 63488 00:10:35.671 }, 00:10:35.671 { 00:10:35.671 "name": "BaseBdev2", 00:10:35.671 "uuid": "19178bd7-cdf2-4ea2-ad18-ffd9b632b01c", 00:10:35.671 "is_configured": true, 00:10:35.671 "data_offset": 2048, 00:10:35.671 "data_size": 63488 00:10:35.671 }, 00:10:35.671 { 00:10:35.671 "name": "BaseBdev3", 00:10:35.671 "uuid": 
"4f119d5a-2c5b-48d1-8e42-a70a453404ed", 00:10:35.671 "is_configured": true, 00:10:35.671 "data_offset": 2048, 00:10:35.671 "data_size": 63488 00:10:35.671 } 00:10:35.671 ] 00:10:35.671 }' 00:10:35.671 20:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.671 20:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.249 20:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:36.249 20:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:36.249 20:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:36.249 20:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:36.249 20:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:36.249 20:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:36.249 20:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:36.249 20:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:36.249 20:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.249 20:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.249 [2024-10-17 20:07:21.608936] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:36.249 20:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.249 20:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:36.249 "name": "Existed_Raid", 00:10:36.249 "aliases": [ 00:10:36.249 "066263be-1228-4a95-929f-d5788a7cb7b9" 
00:10:36.249 ], 00:10:36.249 "product_name": "Raid Volume", 00:10:36.249 "block_size": 512, 00:10:36.249 "num_blocks": 190464, 00:10:36.249 "uuid": "066263be-1228-4a95-929f-d5788a7cb7b9", 00:10:36.249 "assigned_rate_limits": { 00:10:36.249 "rw_ios_per_sec": 0, 00:10:36.249 "rw_mbytes_per_sec": 0, 00:10:36.249 "r_mbytes_per_sec": 0, 00:10:36.249 "w_mbytes_per_sec": 0 00:10:36.249 }, 00:10:36.249 "claimed": false, 00:10:36.249 "zoned": false, 00:10:36.249 "supported_io_types": { 00:10:36.249 "read": true, 00:10:36.249 "write": true, 00:10:36.249 "unmap": true, 00:10:36.249 "flush": true, 00:10:36.249 "reset": true, 00:10:36.249 "nvme_admin": false, 00:10:36.249 "nvme_io": false, 00:10:36.249 "nvme_io_md": false, 00:10:36.249 "write_zeroes": true, 00:10:36.249 "zcopy": false, 00:10:36.249 "get_zone_info": false, 00:10:36.249 "zone_management": false, 00:10:36.249 "zone_append": false, 00:10:36.249 "compare": false, 00:10:36.249 "compare_and_write": false, 00:10:36.249 "abort": false, 00:10:36.249 "seek_hole": false, 00:10:36.249 "seek_data": false, 00:10:36.249 "copy": false, 00:10:36.249 "nvme_iov_md": false 00:10:36.249 }, 00:10:36.249 "memory_domains": [ 00:10:36.249 { 00:10:36.249 "dma_device_id": "system", 00:10:36.249 "dma_device_type": 1 00:10:36.249 }, 00:10:36.249 { 00:10:36.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.249 "dma_device_type": 2 00:10:36.249 }, 00:10:36.249 { 00:10:36.249 "dma_device_id": "system", 00:10:36.249 "dma_device_type": 1 00:10:36.249 }, 00:10:36.249 { 00:10:36.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.249 "dma_device_type": 2 00:10:36.249 }, 00:10:36.249 { 00:10:36.249 "dma_device_id": "system", 00:10:36.249 "dma_device_type": 1 00:10:36.249 }, 00:10:36.249 { 00:10:36.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.249 "dma_device_type": 2 00:10:36.249 } 00:10:36.249 ], 00:10:36.249 "driver_specific": { 00:10:36.249 "raid": { 00:10:36.249 "uuid": "066263be-1228-4a95-929f-d5788a7cb7b9", 00:10:36.249 
"strip_size_kb": 64, 00:10:36.249 "state": "online", 00:10:36.249 "raid_level": "raid0", 00:10:36.249 "superblock": true, 00:10:36.249 "num_base_bdevs": 3, 00:10:36.249 "num_base_bdevs_discovered": 3, 00:10:36.249 "num_base_bdevs_operational": 3, 00:10:36.249 "base_bdevs_list": [ 00:10:36.249 { 00:10:36.249 "name": "NewBaseBdev", 00:10:36.249 "uuid": "072930e4-c527-4415-b076-6bb3a5921423", 00:10:36.249 "is_configured": true, 00:10:36.249 "data_offset": 2048, 00:10:36.249 "data_size": 63488 00:10:36.249 }, 00:10:36.249 { 00:10:36.249 "name": "BaseBdev2", 00:10:36.249 "uuid": "19178bd7-cdf2-4ea2-ad18-ffd9b632b01c", 00:10:36.250 "is_configured": true, 00:10:36.250 "data_offset": 2048, 00:10:36.250 "data_size": 63488 00:10:36.250 }, 00:10:36.250 { 00:10:36.250 "name": "BaseBdev3", 00:10:36.250 "uuid": "4f119d5a-2c5b-48d1-8e42-a70a453404ed", 00:10:36.250 "is_configured": true, 00:10:36.250 "data_offset": 2048, 00:10:36.250 "data_size": 63488 00:10:36.250 } 00:10:36.250 ] 00:10:36.250 } 00:10:36.250 } 00:10:36.250 }' 00:10:36.250 20:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:36.250 20:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:36.250 BaseBdev2 00:10:36.250 BaseBdev3' 00:10:36.250 20:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.250 20:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:36.250 20:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:36.250 20:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:36.250 20:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:36.250 20:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.250 20:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.250 20:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.250 20:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:36.250 20:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:36.250 20:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:36.250 20:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.250 20:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:36.250 20:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.250 20:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.250 20:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.250 20:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:36.250 20:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:36.250 20:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:36.250 20:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:36.250 20:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" 
")' 00:10:36.250 20:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.250 20:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.250 20:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.507 20:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:36.507 20:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:36.507 20:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:36.507 20:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.507 20:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.507 [2024-10-17 20:07:21.924627] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:36.507 [2024-10-17 20:07:21.924807] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:36.507 [2024-10-17 20:07:21.924924] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:36.507 [2024-10-17 20:07:21.925014] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:36.507 [2024-10-17 20:07:21.925038] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:36.507 20:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.507 20:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64344 00:10:36.507 20:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 64344 ']' 00:10:36.507 20:07:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@954 -- # kill -0 64344 00:10:36.507 20:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:10:36.507 20:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:36.507 20:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64344 00:10:36.507 20:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:36.507 20:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:36.507 killing process with pid 64344 00:10:36.507 20:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64344' 00:10:36.507 20:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 64344 00:10:36.507 [2024-10-17 20:07:21.964230] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:36.507 20:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 64344 00:10:36.766 [2024-10-17 20:07:22.222642] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:37.700 20:07:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:37.700 00:10:37.700 real 0m12.019s 00:10:37.700 user 0m19.985s 00:10:37.700 sys 0m1.679s 00:10:37.700 ************************************ 00:10:37.700 END TEST raid_state_function_test_sb 00:10:37.700 ************************************ 00:10:37.700 20:07:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:37.700 20:07:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.700 20:07:23 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:10:37.700 20:07:23 bdev_raid -- 
common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:37.700 20:07:23 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:37.700 20:07:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:37.700 ************************************ 00:10:37.700 START TEST raid_superblock_test 00:10:37.700 ************************************ 00:10:37.700 20:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 3 00:10:37.700 20:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:10:37.700 20:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:10:37.700 20:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:37.700 20:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:37.700 20:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:37.700 20:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:37.700 20:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:37.700 20:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:37.700 20:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:37.700 20:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:37.700 20:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:37.700 20:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:37.700 20:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:37.700 20:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:10:37.700 20:07:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:37.700 20:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:37.700 20:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=64981 00:10:37.700 20:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:37.700 20:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 64981 00:10:37.700 20:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 64981 ']' 00:10:37.700 20:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.700 20:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:37.700 20:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.700 20:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:37.700 20:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.959 [2024-10-17 20:07:23.413230] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 
00:10:37.959 [2024-10-17 20:07:23.413559] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64981 ] 00:10:37.959 [2024-10-17 20:07:23.576795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:38.217 [2024-10-17 20:07:23.707687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.475 [2024-10-17 20:07:23.904277] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:38.475 [2024-10-17 20:07:23.904512] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:39.042 20:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:39.042 20:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:10:39.042 20:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:39.042 20:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:39.042 20:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:39.042 20:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:39.042 20:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:39.042 20:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:39.042 20:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:39.042 20:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:39.042 20:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:39.042 
20:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.042 20:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.042 malloc1 00:10:39.042 20:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.042 20:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:39.042 20:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.042 20:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.042 [2024-10-17 20:07:24.496068] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:39.042 [2024-10-17 20:07:24.496164] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:39.042 [2024-10-17 20:07:24.496205] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:39.042 [2024-10-17 20:07:24.496222] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:39.042 [2024-10-17 20:07:24.499118] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:39.042 [2024-10-17 20:07:24.499163] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:39.042 pt1 00:10:39.042 20:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.042 20:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:39.042 20:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:39.042 20:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:39.042 20:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:39.042 20:07:24 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:39.042 20:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:39.042 20:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:39.042 20:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:39.042 20:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:39.042 20:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.042 20:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.042 malloc2 00:10:39.042 20:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.042 20:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:39.042 20:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.042 20:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.042 [2024-10-17 20:07:24.550872] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:39.042 [2024-10-17 20:07:24.550963] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:39.042 [2024-10-17 20:07:24.550999] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:39.042 [2024-10-17 20:07:24.551067] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:39.042 [2024-10-17 20:07:24.553969] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:39.042 [2024-10-17 20:07:24.554059] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:39.042 
pt2 00:10:39.042 20:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.042 20:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:39.042 20:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:39.042 20:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:39.042 20:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:39.042 20:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:39.042 20:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:39.042 20:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:39.042 20:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:39.042 20:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:39.042 20:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.042 20:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.042 malloc3 00:10:39.042 20:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.042 20:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:39.042 20:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.042 20:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.042 [2024-10-17 20:07:24.618776] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:39.042 [2024-10-17 20:07:24.618895] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:39.042 [2024-10-17 20:07:24.618938] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:39.042 [2024-10-17 20:07:24.618954] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:39.042 [2024-10-17 20:07:24.622125] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:39.042 [2024-10-17 20:07:24.622170] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:39.042 pt3 00:10:39.042 20:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.042 20:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:39.042 20:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:39.042 20:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:10:39.042 20:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.042 20:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.042 [2024-10-17 20:07:24.630912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:39.042 [2024-10-17 20:07:24.633580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:39.042 [2024-10-17 20:07:24.633681] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:39.042 [2024-10-17 20:07:24.633915] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:39.042 [2024-10-17 20:07:24.633953] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:39.042 [2024-10-17 20:07:24.634412] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:10:39.042 [2024-10-17 20:07:24.634663] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:39.042 [2024-10-17 20:07:24.634681] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:39.043 [2024-10-17 20:07:24.635010] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:39.043 20:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.043 20:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:39.043 20:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:39.043 20:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:39.043 20:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:39.043 20:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.043 20:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:39.043 20:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.043 20:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.043 20:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.043 20:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.043 20:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.043 20:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.043 20:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.043 20:07:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:39.043 20:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.043 20:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.043 "name": "raid_bdev1", 00:10:39.043 "uuid": "75a0f84e-62a8-4eb6-8ebc-938bddac0a6f", 00:10:39.043 "strip_size_kb": 64, 00:10:39.043 "state": "online", 00:10:39.043 "raid_level": "raid0", 00:10:39.043 "superblock": true, 00:10:39.043 "num_base_bdevs": 3, 00:10:39.043 "num_base_bdevs_discovered": 3, 00:10:39.043 "num_base_bdevs_operational": 3, 00:10:39.043 "base_bdevs_list": [ 00:10:39.043 { 00:10:39.043 "name": "pt1", 00:10:39.043 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:39.043 "is_configured": true, 00:10:39.043 "data_offset": 2048, 00:10:39.043 "data_size": 63488 00:10:39.043 }, 00:10:39.043 { 00:10:39.043 "name": "pt2", 00:10:39.043 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:39.043 "is_configured": true, 00:10:39.043 "data_offset": 2048, 00:10:39.043 "data_size": 63488 00:10:39.043 }, 00:10:39.043 { 00:10:39.043 "name": "pt3", 00:10:39.043 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:39.043 "is_configured": true, 00:10:39.043 "data_offset": 2048, 00:10:39.043 "data_size": 63488 00:10:39.043 } 00:10:39.043 ] 00:10:39.043 }' 00:10:39.043 20:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.043 20:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.609 20:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:39.609 20:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:39.609 20:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:39.609 20:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:10:39.609 20:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:39.609 20:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:39.609 20:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:39.609 20:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:39.609 20:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.609 20:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.609 [2024-10-17 20:07:25.171503] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:39.609 20:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.609 20:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:39.609 "name": "raid_bdev1", 00:10:39.609 "aliases": [ 00:10:39.609 "75a0f84e-62a8-4eb6-8ebc-938bddac0a6f" 00:10:39.609 ], 00:10:39.609 "product_name": "Raid Volume", 00:10:39.609 "block_size": 512, 00:10:39.609 "num_blocks": 190464, 00:10:39.609 "uuid": "75a0f84e-62a8-4eb6-8ebc-938bddac0a6f", 00:10:39.609 "assigned_rate_limits": { 00:10:39.609 "rw_ios_per_sec": 0, 00:10:39.609 "rw_mbytes_per_sec": 0, 00:10:39.609 "r_mbytes_per_sec": 0, 00:10:39.609 "w_mbytes_per_sec": 0 00:10:39.609 }, 00:10:39.609 "claimed": false, 00:10:39.609 "zoned": false, 00:10:39.609 "supported_io_types": { 00:10:39.609 "read": true, 00:10:39.609 "write": true, 00:10:39.609 "unmap": true, 00:10:39.609 "flush": true, 00:10:39.609 "reset": true, 00:10:39.609 "nvme_admin": false, 00:10:39.609 "nvme_io": false, 00:10:39.609 "nvme_io_md": false, 00:10:39.609 "write_zeroes": true, 00:10:39.609 "zcopy": false, 00:10:39.609 "get_zone_info": false, 00:10:39.609 "zone_management": false, 00:10:39.609 "zone_append": false, 00:10:39.609 "compare": 
false, 00:10:39.609 "compare_and_write": false, 00:10:39.609 "abort": false, 00:10:39.609 "seek_hole": false, 00:10:39.609 "seek_data": false, 00:10:39.609 "copy": false, 00:10:39.609 "nvme_iov_md": false 00:10:39.609 }, 00:10:39.609 "memory_domains": [ 00:10:39.610 { 00:10:39.610 "dma_device_id": "system", 00:10:39.610 "dma_device_type": 1 00:10:39.610 }, 00:10:39.610 { 00:10:39.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.610 "dma_device_type": 2 00:10:39.610 }, 00:10:39.610 { 00:10:39.610 "dma_device_id": "system", 00:10:39.610 "dma_device_type": 1 00:10:39.610 }, 00:10:39.610 { 00:10:39.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.610 "dma_device_type": 2 00:10:39.610 }, 00:10:39.610 { 00:10:39.610 "dma_device_id": "system", 00:10:39.610 "dma_device_type": 1 00:10:39.610 }, 00:10:39.610 { 00:10:39.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.610 "dma_device_type": 2 00:10:39.610 } 00:10:39.610 ], 00:10:39.610 "driver_specific": { 00:10:39.610 "raid": { 00:10:39.610 "uuid": "75a0f84e-62a8-4eb6-8ebc-938bddac0a6f", 00:10:39.610 "strip_size_kb": 64, 00:10:39.610 "state": "online", 00:10:39.610 "raid_level": "raid0", 00:10:39.610 "superblock": true, 00:10:39.610 "num_base_bdevs": 3, 00:10:39.610 "num_base_bdevs_discovered": 3, 00:10:39.610 "num_base_bdevs_operational": 3, 00:10:39.610 "base_bdevs_list": [ 00:10:39.610 { 00:10:39.610 "name": "pt1", 00:10:39.610 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:39.610 "is_configured": true, 00:10:39.610 "data_offset": 2048, 00:10:39.610 "data_size": 63488 00:10:39.610 }, 00:10:39.610 { 00:10:39.610 "name": "pt2", 00:10:39.610 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:39.610 "is_configured": true, 00:10:39.610 "data_offset": 2048, 00:10:39.610 "data_size": 63488 00:10:39.610 }, 00:10:39.610 { 00:10:39.610 "name": "pt3", 00:10:39.610 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:39.610 "is_configured": true, 00:10:39.610 "data_offset": 2048, 00:10:39.610 "data_size": 
63488 00:10:39.610 } 00:10:39.610 ] 00:10:39.610 } 00:10:39.610 } 00:10:39.610 }' 00:10:39.610 20:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:39.868 20:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:39.868 pt2 00:10:39.868 pt3' 00:10:39.868 20:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.868 20:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:39.868 20:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:39.868 20:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:39.868 20:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.868 20:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.868 20:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.868 20:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.868 20:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:39.868 20:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:39.868 20:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:39.868 20:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:39.868 20:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.868 20:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.868 20:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.868 20:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.868 20:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:39.868 20:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:39.868 20:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:39.868 20:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:39.868 20:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.868 20:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.868 20:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.868 20:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.868 20:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:39.868 20:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:39.868 20:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:39.868 20:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:39.868 20:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.868 20:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.868 [2024-10-17 20:07:25.475524] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:39.868 20:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:10:39.868 20:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=75a0f84e-62a8-4eb6-8ebc-938bddac0a6f 00:10:39.868 20:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 75a0f84e-62a8-4eb6-8ebc-938bddac0a6f ']' 00:10:39.868 20:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:39.868 20:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.868 20:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.126 [2024-10-17 20:07:25.523132] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:40.126 [2024-10-17 20:07:25.523166] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:40.126 [2024-10-17 20:07:25.523258] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:40.126 [2024-10-17 20:07:25.523339] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:40.126 [2024-10-17 20:07:25.523356] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:40.126 20:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.126 20:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.126 20:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:40.126 20:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.126 20:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.126 20:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.126 20:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:10:40.126 20:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:40.126 20:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:40.126 20:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:40.126 20:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.127 20:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.127 20:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.127 20:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:40.127 20:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:40.127 20:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.127 20:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.127 20:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.127 20:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:40.127 20:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:40.127 20:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.127 20:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.127 20:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.127 20:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:40.127 20:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:40.127 20:07:25 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.127 20:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.127 20:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.127 20:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:40.127 20:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:40.127 20:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:10:40.127 20:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:40.127 20:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:40.127 20:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:40.127 20:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:40.127 20:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:40.127 20:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:40.127 20:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.127 20:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.127 [2024-10-17 20:07:25.671273] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:40.127 [2024-10-17 20:07:25.673897] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:40.127 [2024-10-17 20:07:25.674137] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:40.127 [2024-10-17 20:07:25.674225] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:40.127 [2024-10-17 20:07:25.674300] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:40.127 [2024-10-17 20:07:25.674338] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:40.127 [2024-10-17 20:07:25.674369] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:40.127 [2024-10-17 20:07:25.674383] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:40.127 request: 00:10:40.127 { 00:10:40.127 "name": "raid_bdev1", 00:10:40.127 "raid_level": "raid0", 00:10:40.127 "base_bdevs": [ 00:10:40.127 "malloc1", 00:10:40.127 "malloc2", 00:10:40.127 "malloc3" 00:10:40.127 ], 00:10:40.127 "strip_size_kb": 64, 00:10:40.127 "superblock": false, 00:10:40.127 "method": "bdev_raid_create", 00:10:40.127 "req_id": 1 00:10:40.127 } 00:10:40.127 Got JSON-RPC error response 00:10:40.127 response: 00:10:40.127 { 00:10:40.127 "code": -17, 00:10:40.127 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:40.127 } 00:10:40.127 20:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:40.127 20:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:10:40.127 20:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:40.127 20:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:40.127 20:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:40.127 20:07:25 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.127 20:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:40.127 20:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.127 20:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.127 20:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.127 20:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:40.127 20:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:40.127 20:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:40.127 20:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.127 20:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.127 [2024-10-17 20:07:25.739211] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:40.127 [2024-10-17 20:07:25.739415] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:40.127 [2024-10-17 20:07:25.739495] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:40.127 [2024-10-17 20:07:25.739663] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:40.127 [2024-10-17 20:07:25.742590] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:40.127 [2024-10-17 20:07:25.742646] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:40.127 [2024-10-17 20:07:25.742753] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:40.127 [2024-10-17 20:07:25.742823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:10:40.127 pt1 00:10:40.127 20:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.127 20:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:10:40.127 20:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:40.127 20:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:40.127 20:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:40.127 20:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.127 20:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:40.127 20:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.127 20:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.127 20:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.127 20:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.127 20:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.127 20:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.127 20:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:40.127 20:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.127 20:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.385 20:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.385 "name": "raid_bdev1", 00:10:40.385 "uuid": "75a0f84e-62a8-4eb6-8ebc-938bddac0a6f", 00:10:40.385 
"strip_size_kb": 64, 00:10:40.385 "state": "configuring", 00:10:40.385 "raid_level": "raid0", 00:10:40.385 "superblock": true, 00:10:40.385 "num_base_bdevs": 3, 00:10:40.385 "num_base_bdevs_discovered": 1, 00:10:40.385 "num_base_bdevs_operational": 3, 00:10:40.385 "base_bdevs_list": [ 00:10:40.385 { 00:10:40.385 "name": "pt1", 00:10:40.385 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:40.385 "is_configured": true, 00:10:40.385 "data_offset": 2048, 00:10:40.385 "data_size": 63488 00:10:40.385 }, 00:10:40.385 { 00:10:40.385 "name": null, 00:10:40.385 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:40.385 "is_configured": false, 00:10:40.385 "data_offset": 2048, 00:10:40.385 "data_size": 63488 00:10:40.385 }, 00:10:40.385 { 00:10:40.385 "name": null, 00:10:40.385 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:40.385 "is_configured": false, 00:10:40.385 "data_offset": 2048, 00:10:40.385 "data_size": 63488 00:10:40.385 } 00:10:40.385 ] 00:10:40.385 }' 00:10:40.385 20:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.385 20:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.643 20:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:10:40.643 20:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:40.643 20:07:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.643 20:07:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.643 [2024-10-17 20:07:26.275435] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:40.643 [2024-10-17 20:07:26.275671] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:40.644 [2024-10-17 20:07:26.275717] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:10:40.644 [2024-10-17 20:07:26.275744] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:40.644 [2024-10-17 20:07:26.276398] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:40.644 [2024-10-17 20:07:26.276458] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:40.644 [2024-10-17 20:07:26.276572] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:40.644 [2024-10-17 20:07:26.276603] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:40.644 pt2 00:10:40.644 20:07:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.644 20:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:40.644 20:07:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.644 20:07:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.644 [2024-10-17 20:07:26.283380] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:40.644 20:07:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.644 20:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:10:40.644 20:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:40.644 20:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:40.644 20:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:40.644 20:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.644 20:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:40.644 20:07:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.644 20:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.644 20:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.644 20:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.644 20:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:40.902 20:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.902 20:07:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.902 20:07:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.902 20:07:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.902 20:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.902 "name": "raid_bdev1", 00:10:40.902 "uuid": "75a0f84e-62a8-4eb6-8ebc-938bddac0a6f", 00:10:40.902 "strip_size_kb": 64, 00:10:40.902 "state": "configuring", 00:10:40.902 "raid_level": "raid0", 00:10:40.902 "superblock": true, 00:10:40.902 "num_base_bdevs": 3, 00:10:40.902 "num_base_bdevs_discovered": 1, 00:10:40.902 "num_base_bdevs_operational": 3, 00:10:40.902 "base_bdevs_list": [ 00:10:40.902 { 00:10:40.902 "name": "pt1", 00:10:40.902 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:40.902 "is_configured": true, 00:10:40.902 "data_offset": 2048, 00:10:40.902 "data_size": 63488 00:10:40.902 }, 00:10:40.902 { 00:10:40.902 "name": null, 00:10:40.902 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:40.902 "is_configured": false, 00:10:40.902 "data_offset": 0, 00:10:40.902 "data_size": 63488 00:10:40.902 }, 00:10:40.902 { 00:10:40.902 "name": null, 00:10:40.902 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:40.902 
"is_configured": false, 00:10:40.902 "data_offset": 2048, 00:10:40.902 "data_size": 63488 00:10:40.902 } 00:10:40.902 ] 00:10:40.902 }' 00:10:40.902 20:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.902 20:07:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.161 20:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:41.161 20:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:41.161 20:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:41.161 20:07:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.161 20:07:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.419 [2024-10-17 20:07:26.815535] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:41.419 [2024-10-17 20:07:26.815635] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:41.419 [2024-10-17 20:07:26.815662] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:41.419 [2024-10-17 20:07:26.815678] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:41.419 [2024-10-17 20:07:26.816295] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:41.419 [2024-10-17 20:07:26.816327] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:41.419 [2024-10-17 20:07:26.816446] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:41.419 [2024-10-17 20:07:26.816520] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:41.419 pt2 00:10:41.420 20:07:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:41.420 20:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:41.420 20:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:41.420 20:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:41.420 20:07:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.420 20:07:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.420 [2024-10-17 20:07:26.823520] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:41.420 [2024-10-17 20:07:26.823590] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:41.420 [2024-10-17 20:07:26.823611] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:41.420 [2024-10-17 20:07:26.823625] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:41.420 [2024-10-17 20:07:26.824079] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:41.420 [2024-10-17 20:07:26.824117] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:41.420 [2024-10-17 20:07:26.824235] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:41.420 [2024-10-17 20:07:26.824268] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:41.420 [2024-10-17 20:07:26.824412] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:41.420 [2024-10-17 20:07:26.824433] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:41.420 [2024-10-17 20:07:26.824795] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:41.420 [2024-10-17 20:07:26.824979] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:41.420 [2024-10-17 20:07:26.824993] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:41.420 [2024-10-17 20:07:26.825185] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:41.420 pt3 00:10:41.420 20:07:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.420 20:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:41.420 20:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:41.420 20:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:41.420 20:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:41.420 20:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:41.420 20:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:41.420 20:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:41.420 20:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:41.420 20:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.420 20:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.420 20:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.420 20:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.420 20:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.420 20:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:10:41.420 20:07:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.420 20:07:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.420 20:07:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.420 20:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.420 "name": "raid_bdev1", 00:10:41.420 "uuid": "75a0f84e-62a8-4eb6-8ebc-938bddac0a6f", 00:10:41.420 "strip_size_kb": 64, 00:10:41.420 "state": "online", 00:10:41.420 "raid_level": "raid0", 00:10:41.420 "superblock": true, 00:10:41.420 "num_base_bdevs": 3, 00:10:41.420 "num_base_bdevs_discovered": 3, 00:10:41.420 "num_base_bdevs_operational": 3, 00:10:41.420 "base_bdevs_list": [ 00:10:41.420 { 00:10:41.420 "name": "pt1", 00:10:41.420 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:41.420 "is_configured": true, 00:10:41.420 "data_offset": 2048, 00:10:41.420 "data_size": 63488 00:10:41.420 }, 00:10:41.420 { 00:10:41.420 "name": "pt2", 00:10:41.420 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:41.420 "is_configured": true, 00:10:41.420 "data_offset": 2048, 00:10:41.420 "data_size": 63488 00:10:41.420 }, 00:10:41.420 { 00:10:41.420 "name": "pt3", 00:10:41.420 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:41.420 "is_configured": true, 00:10:41.420 "data_offset": 2048, 00:10:41.420 "data_size": 63488 00:10:41.420 } 00:10:41.420 ] 00:10:41.420 }' 00:10:41.420 20:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.420 20:07:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.987 20:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:41.987 20:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:41.987 20:07:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:41.987 20:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:41.987 20:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:41.987 20:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:41.987 20:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:41.987 20:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:41.987 20:07:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.987 20:07:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.987 [2024-10-17 20:07:27.348089] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:41.987 20:07:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.987 20:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:41.987 "name": "raid_bdev1", 00:10:41.987 "aliases": [ 00:10:41.987 "75a0f84e-62a8-4eb6-8ebc-938bddac0a6f" 00:10:41.987 ], 00:10:41.987 "product_name": "Raid Volume", 00:10:41.987 "block_size": 512, 00:10:41.987 "num_blocks": 190464, 00:10:41.987 "uuid": "75a0f84e-62a8-4eb6-8ebc-938bddac0a6f", 00:10:41.987 "assigned_rate_limits": { 00:10:41.987 "rw_ios_per_sec": 0, 00:10:41.987 "rw_mbytes_per_sec": 0, 00:10:41.987 "r_mbytes_per_sec": 0, 00:10:41.987 "w_mbytes_per_sec": 0 00:10:41.987 }, 00:10:41.987 "claimed": false, 00:10:41.987 "zoned": false, 00:10:41.987 "supported_io_types": { 00:10:41.987 "read": true, 00:10:41.987 "write": true, 00:10:41.987 "unmap": true, 00:10:41.987 "flush": true, 00:10:41.987 "reset": true, 00:10:41.987 "nvme_admin": false, 00:10:41.987 "nvme_io": false, 00:10:41.987 "nvme_io_md": false, 00:10:41.987 
"write_zeroes": true, 00:10:41.987 "zcopy": false, 00:10:41.987 "get_zone_info": false, 00:10:41.987 "zone_management": false, 00:10:41.987 "zone_append": false, 00:10:41.987 "compare": false, 00:10:41.987 "compare_and_write": false, 00:10:41.987 "abort": false, 00:10:41.987 "seek_hole": false, 00:10:41.987 "seek_data": false, 00:10:41.987 "copy": false, 00:10:41.987 "nvme_iov_md": false 00:10:41.987 }, 00:10:41.987 "memory_domains": [ 00:10:41.987 { 00:10:41.987 "dma_device_id": "system", 00:10:41.987 "dma_device_type": 1 00:10:41.987 }, 00:10:41.987 { 00:10:41.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.987 "dma_device_type": 2 00:10:41.987 }, 00:10:41.987 { 00:10:41.987 "dma_device_id": "system", 00:10:41.987 "dma_device_type": 1 00:10:41.987 }, 00:10:41.987 { 00:10:41.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.987 "dma_device_type": 2 00:10:41.987 }, 00:10:41.987 { 00:10:41.987 "dma_device_id": "system", 00:10:41.987 "dma_device_type": 1 00:10:41.987 }, 00:10:41.987 { 00:10:41.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.987 "dma_device_type": 2 00:10:41.987 } 00:10:41.987 ], 00:10:41.987 "driver_specific": { 00:10:41.987 "raid": { 00:10:41.987 "uuid": "75a0f84e-62a8-4eb6-8ebc-938bddac0a6f", 00:10:41.988 "strip_size_kb": 64, 00:10:41.988 "state": "online", 00:10:41.988 "raid_level": "raid0", 00:10:41.988 "superblock": true, 00:10:41.988 "num_base_bdevs": 3, 00:10:41.988 "num_base_bdevs_discovered": 3, 00:10:41.988 "num_base_bdevs_operational": 3, 00:10:41.988 "base_bdevs_list": [ 00:10:41.988 { 00:10:41.988 "name": "pt1", 00:10:41.988 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:41.988 "is_configured": true, 00:10:41.988 "data_offset": 2048, 00:10:41.988 "data_size": 63488 00:10:41.988 }, 00:10:41.988 { 00:10:41.988 "name": "pt2", 00:10:41.988 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:41.988 "is_configured": true, 00:10:41.988 "data_offset": 2048, 00:10:41.988 "data_size": 63488 00:10:41.988 }, 00:10:41.988 
{ 00:10:41.988 "name": "pt3", 00:10:41.988 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:41.988 "is_configured": true, 00:10:41.988 "data_offset": 2048, 00:10:41.988 "data_size": 63488 00:10:41.988 } 00:10:41.988 ] 00:10:41.988 } 00:10:41.988 } 00:10:41.988 }' 00:10:41.988 20:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:41.988 20:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:41.988 pt2 00:10:41.988 pt3' 00:10:41.988 20:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.988 20:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:41.988 20:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:41.988 20:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:41.988 20:07:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.988 20:07:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.988 20:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.988 20:07:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.988 20:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:41.988 20:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:41.988 20:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:41.988 20:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:41.988 20:07:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.988 20:07:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.988 20:07:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.988 20:07:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.988 20:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:41.988 20:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:41.988 20:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:41.988 20:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.988 20:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:41.988 20:07:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.988 20:07:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.988 20:07:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.292 20:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:42.292 20:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:42.292 20:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:42.292 20:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:42.292 20:07:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.292 20:07:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.292 
[2024-10-17 20:07:27.660198] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:42.292 20:07:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.292 20:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 75a0f84e-62a8-4eb6-8ebc-938bddac0a6f '!=' 75a0f84e-62a8-4eb6-8ebc-938bddac0a6f ']' 00:10:42.292 20:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:10:42.292 20:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:42.292 20:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:42.292 20:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 64981 00:10:42.292 20:07:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 64981 ']' 00:10:42.292 20:07:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 64981 00:10:42.292 20:07:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:10:42.292 20:07:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:42.292 20:07:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64981 00:10:42.292 20:07:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:42.292 killing process with pid 64981 00:10:42.292 20:07:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:42.292 20:07:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64981' 00:10:42.292 20:07:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 64981 00:10:42.292 [2024-10-17 20:07:27.739233] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:42.292 20:07:27 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@974 -- # wait 64981 00:10:42.292 [2024-10-17 20:07:27.739440] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:42.292 [2024-10-17 20:07:27.739546] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:42.292 [2024-10-17 20:07:27.739572] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:42.550 [2024-10-17 20:07:27.993634] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:43.487 ************************************ 00:10:43.487 END TEST raid_superblock_test 00:10:43.487 ************************************ 00:10:43.487 20:07:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:43.487 00:10:43.487 real 0m5.671s 00:10:43.487 user 0m8.595s 00:10:43.487 sys 0m0.836s 00:10:43.487 20:07:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:43.487 20:07:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.487 20:07:29 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:10:43.487 20:07:29 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:43.487 20:07:29 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:43.487 20:07:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:43.487 ************************************ 00:10:43.487 START TEST raid_read_error_test 00:10:43.487 ************************************ 00:10:43.487 20:07:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 read 00:10:43.487 20:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:43.487 20:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:43.487 20:07:29 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:43.487 20:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:43.487 20:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:43.487 20:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:43.487 20:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:43.487 20:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:43.487 20:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:43.487 20:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:43.487 20:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:43.487 20:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:43.487 20:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:43.487 20:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:43.487 20:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:43.487 20:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:43.487 20:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:43.487 20:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:43.487 20:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:43.487 20:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:43.487 20:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:43.487 20:07:29 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:43.487 20:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:43.487 20:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:43.487 20:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:43.487 20:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.JpwoX6Y7u3 00:10:43.487 20:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65245 00:10:43.487 20:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:43.487 20:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65245 00:10:43.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:43.487 20:07:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 65245 ']' 00:10:43.487 20:07:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:43.487 20:07:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:43.487 20:07:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:43.487 20:07:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:43.487 20:07:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.746 [2024-10-17 20:07:29.187843] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 
00:10:43.746 [2024-10-17 20:07:29.188177] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65245 ] 00:10:43.746 [2024-10-17 20:07:29.372770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:44.004 [2024-10-17 20:07:29.512005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.262 [2024-10-17 20:07:29.713625] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:44.262 [2024-10-17 20:07:29.713688] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:44.829 20:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:44.829 20:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:44.829 20:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:44.829 20:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:44.829 20:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.829 20:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.829 BaseBdev1_malloc 00:10:44.829 20:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.829 20:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:44.829 20:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.829 20:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.829 true 00:10:44.829 20:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:44.829 20:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:44.829 20:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.829 20:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.829 [2024-10-17 20:07:30.263089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:44.829 [2024-10-17 20:07:30.263170] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:44.829 [2024-10-17 20:07:30.263200] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:44.829 [2024-10-17 20:07:30.263217] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:44.829 [2024-10-17 20:07:30.266144] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:44.829 [2024-10-17 20:07:30.266193] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:44.829 BaseBdev1 00:10:44.829 20:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.829 20:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:44.829 20:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:44.829 20:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.829 20:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.829 BaseBdev2_malloc 00:10:44.829 20:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.829 20:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:44.829 20:07:30 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.829 20:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.829 true 00:10:44.829 20:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.829 20:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:44.829 20:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.829 20:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.829 [2024-10-17 20:07:30.325735] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:44.829 [2024-10-17 20:07:30.326023] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:44.829 [2024-10-17 20:07:30.326096] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:44.829 [2024-10-17 20:07:30.326391] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:44.829 [2024-10-17 20:07:30.329502] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:44.829 [2024-10-17 20:07:30.329597] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:44.829 BaseBdev2 00:10:44.829 20:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.829 20:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:44.829 20:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:44.829 20:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.829 20:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.829 BaseBdev3_malloc 00:10:44.829 20:07:30 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.829 20:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:44.829 20:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.829 20:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.829 true 00:10:44.829 20:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.829 20:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:44.829 20:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.829 20:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.829 [2024-10-17 20:07:30.401542] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:44.829 [2024-10-17 20:07:30.401644] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:44.829 [2024-10-17 20:07:30.401675] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:44.829 [2024-10-17 20:07:30.401693] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:44.829 [2024-10-17 20:07:30.404814] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:44.829 [2024-10-17 20:07:30.404894] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:44.829 BaseBdev3 00:10:44.829 20:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.829 20:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:44.829 20:07:30 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.829 20:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.829 [2024-10-17 20:07:30.413731] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:44.829 [2024-10-17 20:07:30.416310] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:44.829 [2024-10-17 20:07:30.416436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:44.829 [2024-10-17 20:07:30.416726] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:44.829 [2024-10-17 20:07:30.416746] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:44.829 [2024-10-17 20:07:30.417123] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:44.829 [2024-10-17 20:07:30.417356] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:44.829 [2024-10-17 20:07:30.417392] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:44.829 [2024-10-17 20:07:30.417703] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:44.829 20:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.829 20:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:44.829 20:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:44.829 20:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:44.829 20:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:44.829 20:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.829 20:07:30 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:44.829 20:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.829 20:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.829 20:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.829 20:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.829 20:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:44.829 20:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.829 20:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.829 20:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.829 20:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.829 20:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.829 "name": "raid_bdev1", 00:10:44.829 "uuid": "1de75e58-b939-4de3-9130-cc079095eb02", 00:10:44.829 "strip_size_kb": 64, 00:10:44.829 "state": "online", 00:10:44.829 "raid_level": "raid0", 00:10:44.829 "superblock": true, 00:10:44.829 "num_base_bdevs": 3, 00:10:44.829 "num_base_bdevs_discovered": 3, 00:10:44.829 "num_base_bdevs_operational": 3, 00:10:44.829 "base_bdevs_list": [ 00:10:44.829 { 00:10:44.829 "name": "BaseBdev1", 00:10:44.829 "uuid": "33e078e6-9aac-52df-82d0-73a371366cf3", 00:10:44.829 "is_configured": true, 00:10:44.829 "data_offset": 2048, 00:10:44.829 "data_size": 63488 00:10:44.829 }, 00:10:44.829 { 00:10:44.829 "name": "BaseBdev2", 00:10:44.829 "uuid": "8c92577f-4755-5983-bc0b-0eeb98720fa7", 00:10:44.829 "is_configured": true, 00:10:44.829 "data_offset": 2048, 00:10:44.829 "data_size": 63488 
00:10:44.829 }, 00:10:44.829 { 00:10:44.829 "name": "BaseBdev3", 00:10:44.829 "uuid": "a47bed60-248a-5f3c-b3e0-14b41fbea60b", 00:10:44.829 "is_configured": true, 00:10:44.829 "data_offset": 2048, 00:10:44.829 "data_size": 63488 00:10:44.829 } 00:10:44.829 ] 00:10:44.829 }' 00:10:44.829 20:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.829 20:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.404 20:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:45.404 20:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:45.679 [2024-10-17 20:07:31.107275] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:46.614 20:07:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:46.614 20:07:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.614 20:07:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.614 20:07:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.614 20:07:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:46.614 20:07:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:46.614 20:07:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:46.614 20:07:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:46.614 20:07:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:46.614 20:07:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:10:46.614 20:07:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:46.614 20:07:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:46.614 20:07:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:46.614 20:07:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.614 20:07:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.614 20:07:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.614 20:07:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.614 20:07:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.614 20:07:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:46.614 20:07:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.614 20:07:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.614 20:07:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.614 20:07:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.614 "name": "raid_bdev1", 00:10:46.614 "uuid": "1de75e58-b939-4de3-9130-cc079095eb02", 00:10:46.614 "strip_size_kb": 64, 00:10:46.614 "state": "online", 00:10:46.614 "raid_level": "raid0", 00:10:46.614 "superblock": true, 00:10:46.614 "num_base_bdevs": 3, 00:10:46.614 "num_base_bdevs_discovered": 3, 00:10:46.614 "num_base_bdevs_operational": 3, 00:10:46.614 "base_bdevs_list": [ 00:10:46.614 { 00:10:46.614 "name": "BaseBdev1", 00:10:46.614 "uuid": "33e078e6-9aac-52df-82d0-73a371366cf3", 00:10:46.614 "is_configured": true, 00:10:46.614 "data_offset": 2048, 00:10:46.614 "data_size": 63488 
00:10:46.614 }, 00:10:46.614 { 00:10:46.614 "name": "BaseBdev2", 00:10:46.614 "uuid": "8c92577f-4755-5983-bc0b-0eeb98720fa7", 00:10:46.614 "is_configured": true, 00:10:46.614 "data_offset": 2048, 00:10:46.614 "data_size": 63488 00:10:46.614 }, 00:10:46.614 { 00:10:46.614 "name": "BaseBdev3", 00:10:46.614 "uuid": "a47bed60-248a-5f3c-b3e0-14b41fbea60b", 00:10:46.614 "is_configured": true, 00:10:46.614 "data_offset": 2048, 00:10:46.614 "data_size": 63488 00:10:46.614 } 00:10:46.614 ] 00:10:46.614 }' 00:10:46.614 20:07:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.614 20:07:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.874 20:07:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:46.874 20:07:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.874 20:07:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.874 [2024-10-17 20:07:32.498558] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:46.874 [2024-10-17 20:07:32.498595] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:46.874 [2024-10-17 20:07:32.501893] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:46.874 [2024-10-17 20:07:32.501966] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:46.874 [2024-10-17 20:07:32.502080] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:46.874 [2024-10-17 20:07:32.502097] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:46.874 { 00:10:46.874 "results": [ 00:10:46.874 { 00:10:46.874 "job": "raid_bdev1", 00:10:46.874 "core_mask": "0x1", 00:10:46.874 "workload": "randrw", 00:10:46.874 "percentage": 50, 
00:10:46.874 "status": "finished", 00:10:46.874 "queue_depth": 1, 00:10:46.874 "io_size": 131072, 00:10:46.874 "runtime": 1.388523, 00:10:46.874 "iops": 11258.005809050337, 00:10:46.874 "mibps": 1407.2507261312921, 00:10:46.874 "io_failed": 1, 00:10:46.874 "io_timeout": 0, 00:10:46.874 "avg_latency_us": 124.29929694178398, 00:10:46.874 "min_latency_us": 36.77090909090909, 00:10:46.874 "max_latency_us": 1787.3454545454545 00:10:46.874 } 00:10:46.874 ], 00:10:46.874 "core_count": 1 00:10:46.874 } 00:10:46.874 20:07:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.874 20:07:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65245 00:10:46.874 20:07:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 65245 ']' 00:10:46.874 20:07:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 65245 00:10:46.874 20:07:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:10:46.874 20:07:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:46.874 20:07:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65245 00:10:47.133 killing process with pid 65245 00:10:47.133 20:07:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:47.133 20:07:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:47.133 20:07:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65245' 00:10:47.133 20:07:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 65245 00:10:47.133 [2024-10-17 20:07:32.542225] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:47.133 20:07:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 65245 00:10:47.133 [2024-10-17 
20:07:32.725403] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:48.511 20:07:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.JpwoX6Y7u3 00:10:48.511 20:07:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:48.511 20:07:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:48.511 20:07:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:10:48.511 20:07:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:48.511 20:07:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:48.511 20:07:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:48.511 20:07:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:10:48.511 00:10:48.511 real 0m4.732s 00:10:48.511 user 0m5.908s 00:10:48.511 sys 0m0.630s 00:10:48.511 20:07:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:48.511 20:07:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.511 ************************************ 00:10:48.511 END TEST raid_read_error_test 00:10:48.511 ************************************ 00:10:48.511 20:07:33 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:10:48.511 20:07:33 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:48.511 20:07:33 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:48.511 20:07:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:48.511 ************************************ 00:10:48.511 START TEST raid_write_error_test 00:10:48.511 ************************************ 00:10:48.512 20:07:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 write 00:10:48.512 20:07:33 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:48.512 20:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:48.512 20:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:48.512 20:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:48.512 20:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:48.512 20:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:48.512 20:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:48.512 20:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:48.512 20:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:48.512 20:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:48.512 20:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:48.512 20:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:48.512 20:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:48.512 20:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:48.512 20:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:48.512 20:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:48.512 20:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:48.512 20:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:48.512 20:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:48.512 20:07:33 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:48.512 20:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:48.512 20:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:48.512 20:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:48.512 20:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:48.512 20:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:48.512 20:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.hR7wMYjiFs 00:10:48.512 20:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65385 00:10:48.512 20:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65385 00:10:48.512 20:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:48.512 20:07:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 65385 ']' 00:10:48.512 20:07:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:48.512 20:07:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:48.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:48.512 20:07:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:48.512 20:07:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:48.512 20:07:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.512 [2024-10-17 20:07:33.969641] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 00:10:48.512 [2024-10-17 20:07:33.969884] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65385 ] 00:10:48.512 [2024-10-17 20:07:34.140579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.771 [2024-10-17 20:07:34.267673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.030 [2024-10-17 20:07:34.458361] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:49.030 [2024-10-17 20:07:34.458467] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:49.288 20:07:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:49.288 20:07:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:49.288 20:07:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:49.288 20:07:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:49.288 20:07:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.288 20:07:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.288 BaseBdev1_malloc 00:10:49.288 20:07:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.288 20:07:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:10:49.547 20:07:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.547 20:07:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.547 true 00:10:49.547 20:07:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.547 20:07:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:49.547 20:07:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.547 20:07:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.547 [2024-10-17 20:07:34.958663] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:49.547 [2024-10-17 20:07:34.958753] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:49.547 [2024-10-17 20:07:34.958784] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:49.547 [2024-10-17 20:07:34.958804] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:49.547 [2024-10-17 20:07:34.961851] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:49.547 [2024-10-17 20:07:34.961919] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:49.547 BaseBdev1 00:10:49.547 20:07:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.547 20:07:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:49.547 20:07:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:49.547 20:07:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.547 20:07:34 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:49.547 BaseBdev2_malloc 00:10:49.547 20:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.547 20:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:49.547 20:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.547 20:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.547 true 00:10:49.547 20:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.547 20:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:49.547 20:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.547 20:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.547 [2024-10-17 20:07:35.024686] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:49.547 [2024-10-17 20:07:35.024778] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:49.547 [2024-10-17 20:07:35.024806] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:49.548 [2024-10-17 20:07:35.024822] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:49.548 [2024-10-17 20:07:35.027638] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:49.548 [2024-10-17 20:07:35.027696] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:49.548 BaseBdev2 00:10:49.548 20:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.548 20:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:49.548 20:07:35 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:49.548 20:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.548 20:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.548 BaseBdev3_malloc 00:10:49.548 20:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.548 20:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:49.548 20:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.548 20:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.548 true 00:10:49.548 20:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.548 20:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:49.548 20:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.548 20:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.548 [2024-10-17 20:07:35.097025] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:49.548 [2024-10-17 20:07:35.097125] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:49.548 [2024-10-17 20:07:35.097152] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:49.548 [2024-10-17 20:07:35.097170] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:49.548 [2024-10-17 20:07:35.100062] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:49.548 [2024-10-17 20:07:35.100160] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:10:49.548 BaseBdev3 00:10:49.548 20:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.548 20:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:49.548 20:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.548 20:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.548 [2024-10-17 20:07:35.105134] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:49.548 [2024-10-17 20:07:35.107553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:49.548 [2024-10-17 20:07:35.107680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:49.548 [2024-10-17 20:07:35.107964] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:49.548 [2024-10-17 20:07:35.108020] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:49.548 [2024-10-17 20:07:35.108345] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:49.548 [2024-10-17 20:07:35.108601] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:49.548 [2024-10-17 20:07:35.108631] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:49.548 [2024-10-17 20:07:35.108808] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:49.548 20:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.548 20:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:49.548 20:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:10:49.548 20:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:49.548 20:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:49.548 20:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.548 20:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:49.548 20:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.548 20:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.548 20:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.548 20:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.548 20:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.548 20:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:49.548 20:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.548 20:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.548 20:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.548 20:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.548 "name": "raid_bdev1", 00:10:49.548 "uuid": "e414446c-bd81-4cce-b574-fe78d72137c3", 00:10:49.548 "strip_size_kb": 64, 00:10:49.548 "state": "online", 00:10:49.548 "raid_level": "raid0", 00:10:49.548 "superblock": true, 00:10:49.548 "num_base_bdevs": 3, 00:10:49.548 "num_base_bdevs_discovered": 3, 00:10:49.548 "num_base_bdevs_operational": 3, 00:10:49.548 "base_bdevs_list": [ 00:10:49.548 { 00:10:49.548 "name": "BaseBdev1", 
00:10:49.548 "uuid": "f877a51e-10fb-5689-96cb-439c1b8ffbd6", 00:10:49.548 "is_configured": true, 00:10:49.548 "data_offset": 2048, 00:10:49.548 "data_size": 63488 00:10:49.548 }, 00:10:49.548 { 00:10:49.548 "name": "BaseBdev2", 00:10:49.548 "uuid": "888b31c3-774b-5018-9825-d11cbe0a9f75", 00:10:49.548 "is_configured": true, 00:10:49.548 "data_offset": 2048, 00:10:49.548 "data_size": 63488 00:10:49.548 }, 00:10:49.548 { 00:10:49.548 "name": "BaseBdev3", 00:10:49.548 "uuid": "dc800965-9ed8-5bd0-b0d3-a52483ee35a1", 00:10:49.548 "is_configured": true, 00:10:49.548 "data_offset": 2048, 00:10:49.548 "data_size": 63488 00:10:49.548 } 00:10:49.548 ] 00:10:49.548 }' 00:10:49.548 20:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.548 20:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.115 20:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:50.115 20:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:50.115 [2024-10-17 20:07:35.734706] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:51.050 20:07:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:51.050 20:07:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.050 20:07:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.050 20:07:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.050 20:07:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:51.050 20:07:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:51.050 20:07:36 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:51.050 20:07:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:51.050 20:07:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:51.050 20:07:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:51.050 20:07:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:51.050 20:07:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:51.050 20:07:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:51.050 20:07:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.050 20:07:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.050 20:07:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.050 20:07:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.050 20:07:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.050 20:07:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.050 20:07:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.050 20:07:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:51.050 20:07:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.050 20:07:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.050 "name": "raid_bdev1", 00:10:51.050 "uuid": "e414446c-bd81-4cce-b574-fe78d72137c3", 00:10:51.050 "strip_size_kb": 64, 00:10:51.050 "state": "online", 00:10:51.050 
"raid_level": "raid0", 00:10:51.050 "superblock": true, 00:10:51.050 "num_base_bdevs": 3, 00:10:51.050 "num_base_bdevs_discovered": 3, 00:10:51.050 "num_base_bdevs_operational": 3, 00:10:51.050 "base_bdevs_list": [ 00:10:51.050 { 00:10:51.050 "name": "BaseBdev1", 00:10:51.050 "uuid": "f877a51e-10fb-5689-96cb-439c1b8ffbd6", 00:10:51.050 "is_configured": true, 00:10:51.050 "data_offset": 2048, 00:10:51.051 "data_size": 63488 00:10:51.051 }, 00:10:51.051 { 00:10:51.051 "name": "BaseBdev2", 00:10:51.051 "uuid": "888b31c3-774b-5018-9825-d11cbe0a9f75", 00:10:51.051 "is_configured": true, 00:10:51.051 "data_offset": 2048, 00:10:51.051 "data_size": 63488 00:10:51.051 }, 00:10:51.051 { 00:10:51.051 "name": "BaseBdev3", 00:10:51.051 "uuid": "dc800965-9ed8-5bd0-b0d3-a52483ee35a1", 00:10:51.051 "is_configured": true, 00:10:51.051 "data_offset": 2048, 00:10:51.051 "data_size": 63488 00:10:51.051 } 00:10:51.051 ] 00:10:51.051 }' 00:10:51.051 20:07:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.051 20:07:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.617 20:07:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:51.617 20:07:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.617 20:07:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.617 [2024-10-17 20:07:37.153905] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:51.617 [2024-10-17 20:07:37.153941] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:51.617 [2024-10-17 20:07:37.157593] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:51.617 [2024-10-17 20:07:37.157668] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:51.617 [2024-10-17 20:07:37.157720] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:51.617 [2024-10-17 20:07:37.157735] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:51.617 { 00:10:51.617 "results": [ 00:10:51.617 { 00:10:51.617 "job": "raid_bdev1", 00:10:51.617 "core_mask": "0x1", 00:10:51.617 "workload": "randrw", 00:10:51.617 "percentage": 50, 00:10:51.617 "status": "finished", 00:10:51.617 "queue_depth": 1, 00:10:51.617 "io_size": 131072, 00:10:51.617 "runtime": 1.416717, 00:10:51.617 "iops": 10888.554312540895, 00:10:51.617 "mibps": 1361.0692890676119, 00:10:51.617 "io_failed": 1, 00:10:51.617 "io_timeout": 0, 00:10:51.617 "avg_latency_us": 128.430214323176, 00:10:51.617 "min_latency_us": 32.81454545454545, 00:10:51.617 "max_latency_us": 1586.269090909091 00:10:51.617 } 00:10:51.617 ], 00:10:51.617 "core_count": 1 00:10:51.617 } 00:10:51.617 20:07:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.617 20:07:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65385 00:10:51.617 20:07:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 65385 ']' 00:10:51.617 20:07:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 65385 00:10:51.617 20:07:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:10:51.617 20:07:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:51.617 20:07:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65385 00:10:51.617 20:07:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:51.617 20:07:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:51.617 killing process with pid 65385 00:10:51.618 20:07:37 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65385' 00:10:51.618 20:07:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 65385 00:10:51.618 [2024-10-17 20:07:37.191457] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:51.618 20:07:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 65385 00:10:51.883 [2024-10-17 20:07:37.371551] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:52.816 20:07:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:52.816 20:07:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.hR7wMYjiFs 00:10:52.816 20:07:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:52.816 20:07:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:10:52.816 20:07:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:52.816 20:07:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:52.816 20:07:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:52.816 20:07:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:10:52.816 00:10:52.816 real 0m4.539s 00:10:52.816 user 0m5.619s 00:10:52.816 sys 0m0.574s 00:10:52.816 20:07:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:52.816 20:07:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.816 ************************************ 00:10:52.816 END TEST raid_write_error_test 00:10:52.816 ************************************ 00:10:52.816 20:07:38 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:52.816 20:07:38 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:10:52.816 20:07:38 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:52.816 20:07:38 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:52.816 20:07:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:52.816 ************************************ 00:10:52.816 START TEST raid_state_function_test 00:10:52.816 ************************************ 00:10:52.816 20:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 false 00:10:52.816 20:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:52.816 20:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:52.816 20:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:52.816 20:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:52.816 20:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:52.816 20:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:52.816 20:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:52.816 20:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:52.816 20:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:52.816 20:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:52.816 20:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:52.816 20:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:52.816 20:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:52.816 20:07:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:52.816 20:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:52.816 20:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:52.817 20:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:52.817 20:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:52.817 20:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:52.817 20:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:52.817 20:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:52.817 20:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:52.817 20:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:52.817 20:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:52.817 20:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:52.817 20:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:52.817 20:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65529 00:10:52.817 Process raid pid: 65529 00:10:52.817 20:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:52.817 20:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65529' 00:10:52.817 20:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65529 00:10:52.817 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:52.817 20:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 65529 ']' 00:10:52.817 20:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:52.817 20:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:52.817 20:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:52.817 20:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:52.817 20:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.075 [2024-10-17 20:07:38.596180] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 00:10:53.075 [2024-10-17 20:07:38.596553] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:53.333 [2024-10-17 20:07:38.763869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:53.333 [2024-10-17 20:07:38.921207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.592 [2024-10-17 20:07:39.114577] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:53.592 [2024-10-17 20:07:39.114624] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:54.160 20:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:54.160 20:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:10:54.160 20:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 
-r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:54.160 20:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.160 20:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.160 [2024-10-17 20:07:39.555906] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:54.160 [2024-10-17 20:07:39.555979] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:54.160 [2024-10-17 20:07:39.555994] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:54.160 [2024-10-17 20:07:39.556054] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:54.160 [2024-10-17 20:07:39.556066] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:54.160 [2024-10-17 20:07:39.556082] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:54.160 20:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.160 20:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:54.160 20:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.160 20:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:54.160 20:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:54.160 20:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:54.160 20:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:54.160 20:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:54.160 20:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.160 20:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.160 20:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.160 20:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.160 20:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.160 20:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.160 20:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.160 20:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.160 20:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.160 "name": "Existed_Raid", 00:10:54.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.160 "strip_size_kb": 64, 00:10:54.160 "state": "configuring", 00:10:54.160 "raid_level": "concat", 00:10:54.160 "superblock": false, 00:10:54.160 "num_base_bdevs": 3, 00:10:54.160 "num_base_bdevs_discovered": 0, 00:10:54.160 "num_base_bdevs_operational": 3, 00:10:54.160 "base_bdevs_list": [ 00:10:54.160 { 00:10:54.160 "name": "BaseBdev1", 00:10:54.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.160 "is_configured": false, 00:10:54.160 "data_offset": 0, 00:10:54.160 "data_size": 0 00:10:54.160 }, 00:10:54.160 { 00:10:54.160 "name": "BaseBdev2", 00:10:54.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.160 "is_configured": false, 00:10:54.160 "data_offset": 0, 00:10:54.160 "data_size": 0 00:10:54.160 }, 00:10:54.160 { 00:10:54.160 "name": "BaseBdev3", 00:10:54.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.160 "is_configured": 
false, 00:10:54.160 "data_offset": 0, 00:10:54.160 "data_size": 0 00:10:54.160 } 00:10:54.160 ] 00:10:54.160 }' 00:10:54.160 20:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.160 20:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.727 20:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:54.727 20:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.727 20:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.727 [2024-10-17 20:07:40.080069] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:54.727 [2024-10-17 20:07:40.080308] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:54.727 20:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.727 20:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:54.727 20:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.727 20:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.727 [2024-10-17 20:07:40.088091] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:54.727 [2024-10-17 20:07:40.088182] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:54.727 [2024-10-17 20:07:40.088198] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:54.727 [2024-10-17 20:07:40.088215] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:54.727 [2024-10-17 20:07:40.088225] 
bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:54.727 [2024-10-17 20:07:40.088241] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:54.728 20:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.728 20:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:54.728 20:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.728 20:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.728 [2024-10-17 20:07:40.131302] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:54.728 BaseBdev1 00:10:54.728 20:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.728 20:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:54.728 20:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:54.728 20:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:54.728 20:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:54.728 20:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:54.728 20:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:54.728 20:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:54.728 20:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.728 20:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.728 20:07:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.728 20:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:54.728 20:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.728 20:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.728 [ 00:10:54.728 { 00:10:54.728 "name": "BaseBdev1", 00:10:54.728 "aliases": [ 00:10:54.728 "276dba77-184f-4371-9997-a047a20c5404" 00:10:54.728 ], 00:10:54.728 "product_name": "Malloc disk", 00:10:54.728 "block_size": 512, 00:10:54.728 "num_blocks": 65536, 00:10:54.728 "uuid": "276dba77-184f-4371-9997-a047a20c5404", 00:10:54.728 "assigned_rate_limits": { 00:10:54.728 "rw_ios_per_sec": 0, 00:10:54.728 "rw_mbytes_per_sec": 0, 00:10:54.728 "r_mbytes_per_sec": 0, 00:10:54.728 "w_mbytes_per_sec": 0 00:10:54.728 }, 00:10:54.728 "claimed": true, 00:10:54.728 "claim_type": "exclusive_write", 00:10:54.728 "zoned": false, 00:10:54.728 "supported_io_types": { 00:10:54.728 "read": true, 00:10:54.728 "write": true, 00:10:54.728 "unmap": true, 00:10:54.728 "flush": true, 00:10:54.728 "reset": true, 00:10:54.728 "nvme_admin": false, 00:10:54.728 "nvme_io": false, 00:10:54.728 "nvme_io_md": false, 00:10:54.728 "write_zeroes": true, 00:10:54.728 "zcopy": true, 00:10:54.728 "get_zone_info": false, 00:10:54.728 "zone_management": false, 00:10:54.728 "zone_append": false, 00:10:54.728 "compare": false, 00:10:54.728 "compare_and_write": false, 00:10:54.728 "abort": true, 00:10:54.728 "seek_hole": false, 00:10:54.728 "seek_data": false, 00:10:54.728 "copy": true, 00:10:54.728 "nvme_iov_md": false 00:10:54.728 }, 00:10:54.728 "memory_domains": [ 00:10:54.728 { 00:10:54.728 "dma_device_id": "system", 00:10:54.728 "dma_device_type": 1 00:10:54.728 }, 00:10:54.728 { 00:10:54.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.728 "dma_device_type": 2 00:10:54.728 } 00:10:54.728 ], 
00:10:54.728 "driver_specific": {} 00:10:54.728 } 00:10:54.728 ] 00:10:54.728 20:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.728 20:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:54.728 20:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:54.728 20:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.728 20:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:54.728 20:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:54.728 20:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:54.728 20:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:54.728 20:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.728 20:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.728 20:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.728 20:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.728 20:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.728 20:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.728 20:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.728 20:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.728 20:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:54.728 20:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.728 "name": "Existed_Raid", 00:10:54.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.728 "strip_size_kb": 64, 00:10:54.728 "state": "configuring", 00:10:54.728 "raid_level": "concat", 00:10:54.728 "superblock": false, 00:10:54.728 "num_base_bdevs": 3, 00:10:54.728 "num_base_bdevs_discovered": 1, 00:10:54.728 "num_base_bdevs_operational": 3, 00:10:54.728 "base_bdevs_list": [ 00:10:54.728 { 00:10:54.728 "name": "BaseBdev1", 00:10:54.728 "uuid": "276dba77-184f-4371-9997-a047a20c5404", 00:10:54.728 "is_configured": true, 00:10:54.728 "data_offset": 0, 00:10:54.728 "data_size": 65536 00:10:54.728 }, 00:10:54.728 { 00:10:54.728 "name": "BaseBdev2", 00:10:54.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.728 "is_configured": false, 00:10:54.728 "data_offset": 0, 00:10:54.728 "data_size": 0 00:10:54.728 }, 00:10:54.728 { 00:10:54.728 "name": "BaseBdev3", 00:10:54.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.728 "is_configured": false, 00:10:54.728 "data_offset": 0, 00:10:54.728 "data_size": 0 00:10:54.728 } 00:10:54.728 ] 00:10:54.728 }' 00:10:54.728 20:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.728 20:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.321 20:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:55.321 20:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.321 20:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.321 [2024-10-17 20:07:40.691555] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:55.321 [2024-10-17 20:07:40.691616] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
Existed_Raid, state configuring 00:10:55.321 20:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.321 20:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:55.321 20:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.321 20:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.321 [2024-10-17 20:07:40.699634] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:55.321 [2024-10-17 20:07:40.702178] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:55.321 [2024-10-17 20:07:40.702247] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:55.321 [2024-10-17 20:07:40.702262] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:55.321 [2024-10-17 20:07:40.702279] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:55.321 20:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.321 20:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:55.321 20:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:55.321 20:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:55.321 20:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.321 20:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:55.321 20:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:10:55.321 20:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.321 20:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:55.321 20:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.321 20:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.321 20:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.321 20:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.321 20:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.321 20:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.321 20:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.321 20:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.321 20:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.321 20:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.321 "name": "Existed_Raid", 00:10:55.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.321 "strip_size_kb": 64, 00:10:55.321 "state": "configuring", 00:10:55.321 "raid_level": "concat", 00:10:55.321 "superblock": false, 00:10:55.321 "num_base_bdevs": 3, 00:10:55.321 "num_base_bdevs_discovered": 1, 00:10:55.321 "num_base_bdevs_operational": 3, 00:10:55.321 "base_bdevs_list": [ 00:10:55.321 { 00:10:55.321 "name": "BaseBdev1", 00:10:55.321 "uuid": "276dba77-184f-4371-9997-a047a20c5404", 00:10:55.321 "is_configured": true, 00:10:55.321 "data_offset": 0, 00:10:55.321 "data_size": 65536 00:10:55.321 }, 00:10:55.321 { 
00:10:55.321 "name": "BaseBdev2", 00:10:55.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.321 "is_configured": false, 00:10:55.321 "data_offset": 0, 00:10:55.321 "data_size": 0 00:10:55.321 }, 00:10:55.321 { 00:10:55.321 "name": "BaseBdev3", 00:10:55.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.321 "is_configured": false, 00:10:55.321 "data_offset": 0, 00:10:55.321 "data_size": 0 00:10:55.321 } 00:10:55.321 ] 00:10:55.321 }' 00:10:55.321 20:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.321 20:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.579 20:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:55.579 20:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.579 20:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.837 [2024-10-17 20:07:41.255672] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:55.837 BaseBdev2 00:10:55.837 20:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.837 20:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:55.837 20:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:55.837 20:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:55.837 20:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:55.837 20:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:55.837 20:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:55.837 20:07:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:55.837 20:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.837 20:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.837 20:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.837 20:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:55.837 20:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.837 20:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.837 [ 00:10:55.837 { 00:10:55.837 "name": "BaseBdev2", 00:10:55.837 "aliases": [ 00:10:55.837 "05ed9a9d-6849-4841-a9df-24736526d530" 00:10:55.837 ], 00:10:55.837 "product_name": "Malloc disk", 00:10:55.837 "block_size": 512, 00:10:55.837 "num_blocks": 65536, 00:10:55.837 "uuid": "05ed9a9d-6849-4841-a9df-24736526d530", 00:10:55.837 "assigned_rate_limits": { 00:10:55.837 "rw_ios_per_sec": 0, 00:10:55.837 "rw_mbytes_per_sec": 0, 00:10:55.837 "r_mbytes_per_sec": 0, 00:10:55.837 "w_mbytes_per_sec": 0 00:10:55.837 }, 00:10:55.837 "claimed": true, 00:10:55.837 "claim_type": "exclusive_write", 00:10:55.837 "zoned": false, 00:10:55.837 "supported_io_types": { 00:10:55.837 "read": true, 00:10:55.837 "write": true, 00:10:55.837 "unmap": true, 00:10:55.837 "flush": true, 00:10:55.837 "reset": true, 00:10:55.837 "nvme_admin": false, 00:10:55.837 "nvme_io": false, 00:10:55.837 "nvme_io_md": false, 00:10:55.837 "write_zeroes": true, 00:10:55.837 "zcopy": true, 00:10:55.837 "get_zone_info": false, 00:10:55.837 "zone_management": false, 00:10:55.837 "zone_append": false, 00:10:55.837 "compare": false, 00:10:55.837 "compare_and_write": false, 00:10:55.837 "abort": true, 00:10:55.837 "seek_hole": false, 00:10:55.837 "seek_data": false, 00:10:55.837 
"copy": true, 00:10:55.837 "nvme_iov_md": false 00:10:55.837 }, 00:10:55.837 "memory_domains": [ 00:10:55.837 { 00:10:55.837 "dma_device_id": "system", 00:10:55.837 "dma_device_type": 1 00:10:55.837 }, 00:10:55.837 { 00:10:55.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.837 "dma_device_type": 2 00:10:55.837 } 00:10:55.837 ], 00:10:55.837 "driver_specific": {} 00:10:55.837 } 00:10:55.837 ] 00:10:55.837 20:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.837 20:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:55.837 20:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:55.837 20:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:55.837 20:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:55.837 20:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.837 20:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:55.837 20:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:55.837 20:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.837 20:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:55.837 20:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.837 20:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.837 20:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.837 20:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.837 
20:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.837 20:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.837 20:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.837 20:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.837 20:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.837 20:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.837 "name": "Existed_Raid", 00:10:55.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.837 "strip_size_kb": 64, 00:10:55.837 "state": "configuring", 00:10:55.837 "raid_level": "concat", 00:10:55.837 "superblock": false, 00:10:55.837 "num_base_bdevs": 3, 00:10:55.837 "num_base_bdevs_discovered": 2, 00:10:55.837 "num_base_bdevs_operational": 3, 00:10:55.837 "base_bdevs_list": [ 00:10:55.837 { 00:10:55.837 "name": "BaseBdev1", 00:10:55.837 "uuid": "276dba77-184f-4371-9997-a047a20c5404", 00:10:55.837 "is_configured": true, 00:10:55.837 "data_offset": 0, 00:10:55.837 "data_size": 65536 00:10:55.837 }, 00:10:55.837 { 00:10:55.837 "name": "BaseBdev2", 00:10:55.837 "uuid": "05ed9a9d-6849-4841-a9df-24736526d530", 00:10:55.837 "is_configured": true, 00:10:55.837 "data_offset": 0, 00:10:55.837 "data_size": 65536 00:10:55.837 }, 00:10:55.837 { 00:10:55.837 "name": "BaseBdev3", 00:10:55.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.837 "is_configured": false, 00:10:55.837 "data_offset": 0, 00:10:55.837 "data_size": 0 00:10:55.837 } 00:10:55.837 ] 00:10:55.837 }' 00:10:55.837 20:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.837 20:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.404 20:07:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:56.404 20:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.404 20:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.404 [2024-10-17 20:07:41.857964] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:56.404 [2024-10-17 20:07:41.858010] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:56.404 [2024-10-17 20:07:41.858029] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:56.404 [2024-10-17 20:07:41.858428] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:56.404 [2024-10-17 20:07:41.858648] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:56.404 [2024-10-17 20:07:41.858672] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:56.404 [2024-10-17 20:07:41.859004] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:56.404 BaseBdev3 00:10:56.404 20:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.404 20:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:56.404 20:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:56.404 20:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:56.404 20:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:56.404 20:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:56.404 20:07:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:56.404 20:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:56.404 20:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.404 20:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.404 20:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.404 20:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:56.404 20:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.404 20:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.404 [ 00:10:56.404 { 00:10:56.404 "name": "BaseBdev3", 00:10:56.404 "aliases": [ 00:10:56.404 "5fd25470-8154-4030-bcef-b5c4db99f4ac" 00:10:56.404 ], 00:10:56.404 "product_name": "Malloc disk", 00:10:56.404 "block_size": 512, 00:10:56.404 "num_blocks": 65536, 00:10:56.404 "uuid": "5fd25470-8154-4030-bcef-b5c4db99f4ac", 00:10:56.404 "assigned_rate_limits": { 00:10:56.404 "rw_ios_per_sec": 0, 00:10:56.404 "rw_mbytes_per_sec": 0, 00:10:56.404 "r_mbytes_per_sec": 0, 00:10:56.404 "w_mbytes_per_sec": 0 00:10:56.404 }, 00:10:56.404 "claimed": true, 00:10:56.404 "claim_type": "exclusive_write", 00:10:56.404 "zoned": false, 00:10:56.404 "supported_io_types": { 00:10:56.404 "read": true, 00:10:56.404 "write": true, 00:10:56.404 "unmap": true, 00:10:56.404 "flush": true, 00:10:56.404 "reset": true, 00:10:56.404 "nvme_admin": false, 00:10:56.404 "nvme_io": false, 00:10:56.404 "nvme_io_md": false, 00:10:56.404 "write_zeroes": true, 00:10:56.404 "zcopy": true, 00:10:56.404 "get_zone_info": false, 00:10:56.404 "zone_management": false, 00:10:56.404 "zone_append": false, 00:10:56.404 "compare": false, 00:10:56.404 "compare_and_write": false, 
00:10:56.404 "abort": true, 00:10:56.404 "seek_hole": false, 00:10:56.404 "seek_data": false, 00:10:56.404 "copy": true, 00:10:56.404 "nvme_iov_md": false 00:10:56.404 }, 00:10:56.404 "memory_domains": [ 00:10:56.404 { 00:10:56.404 "dma_device_id": "system", 00:10:56.404 "dma_device_type": 1 00:10:56.404 }, 00:10:56.404 { 00:10:56.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.404 "dma_device_type": 2 00:10:56.404 } 00:10:56.404 ], 00:10:56.404 "driver_specific": {} 00:10:56.404 } 00:10:56.404 ] 00:10:56.404 20:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.404 20:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:56.404 20:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:56.404 20:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:56.404 20:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:56.404 20:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.404 20:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:56.404 20:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:56.404 20:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:56.404 20:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:56.404 20:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.404 20:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.404 20:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.404 
20:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.404 20:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.404 20:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.404 20:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.404 20:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.404 20:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.404 20:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.404 "name": "Existed_Raid", 00:10:56.404 "uuid": "decfbda2-2bda-4074-8c7a-fbf18e76a7dc", 00:10:56.404 "strip_size_kb": 64, 00:10:56.404 "state": "online", 00:10:56.404 "raid_level": "concat", 00:10:56.404 "superblock": false, 00:10:56.404 "num_base_bdevs": 3, 00:10:56.404 "num_base_bdevs_discovered": 3, 00:10:56.404 "num_base_bdevs_operational": 3, 00:10:56.404 "base_bdevs_list": [ 00:10:56.404 { 00:10:56.404 "name": "BaseBdev1", 00:10:56.404 "uuid": "276dba77-184f-4371-9997-a047a20c5404", 00:10:56.404 "is_configured": true, 00:10:56.404 "data_offset": 0, 00:10:56.404 "data_size": 65536 00:10:56.404 }, 00:10:56.404 { 00:10:56.404 "name": "BaseBdev2", 00:10:56.404 "uuid": "05ed9a9d-6849-4841-a9df-24736526d530", 00:10:56.404 "is_configured": true, 00:10:56.404 "data_offset": 0, 00:10:56.404 "data_size": 65536 00:10:56.404 }, 00:10:56.404 { 00:10:56.404 "name": "BaseBdev3", 00:10:56.404 "uuid": "5fd25470-8154-4030-bcef-b5c4db99f4ac", 00:10:56.404 "is_configured": true, 00:10:56.404 "data_offset": 0, 00:10:56.404 "data_size": 65536 00:10:56.404 } 00:10:56.404 ] 00:10:56.404 }' 00:10:56.404 20:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.404 20:07:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.971 20:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:56.971 20:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:56.971 20:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:56.971 20:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:56.971 20:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:56.971 20:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:56.971 20:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:56.971 20:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.971 20:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.971 20:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:56.971 [2024-10-17 20:07:42.426578] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:56.971 20:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.971 20:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:56.971 "name": "Existed_Raid", 00:10:56.971 "aliases": [ 00:10:56.971 "decfbda2-2bda-4074-8c7a-fbf18e76a7dc" 00:10:56.971 ], 00:10:56.971 "product_name": "Raid Volume", 00:10:56.971 "block_size": 512, 00:10:56.971 "num_blocks": 196608, 00:10:56.971 "uuid": "decfbda2-2bda-4074-8c7a-fbf18e76a7dc", 00:10:56.971 "assigned_rate_limits": { 00:10:56.971 "rw_ios_per_sec": 0, 00:10:56.971 "rw_mbytes_per_sec": 0, 00:10:56.971 "r_mbytes_per_sec": 0, 00:10:56.971 
"w_mbytes_per_sec": 0 00:10:56.971 }, 00:10:56.971 "claimed": false, 00:10:56.971 "zoned": false, 00:10:56.971 "supported_io_types": { 00:10:56.971 "read": true, 00:10:56.971 "write": true, 00:10:56.971 "unmap": true, 00:10:56.971 "flush": true, 00:10:56.971 "reset": true, 00:10:56.971 "nvme_admin": false, 00:10:56.971 "nvme_io": false, 00:10:56.971 "nvme_io_md": false, 00:10:56.971 "write_zeroes": true, 00:10:56.971 "zcopy": false, 00:10:56.971 "get_zone_info": false, 00:10:56.971 "zone_management": false, 00:10:56.971 "zone_append": false, 00:10:56.971 "compare": false, 00:10:56.971 "compare_and_write": false, 00:10:56.971 "abort": false, 00:10:56.971 "seek_hole": false, 00:10:56.971 "seek_data": false, 00:10:56.971 "copy": false, 00:10:56.971 "nvme_iov_md": false 00:10:56.971 }, 00:10:56.971 "memory_domains": [ 00:10:56.971 { 00:10:56.971 "dma_device_id": "system", 00:10:56.971 "dma_device_type": 1 00:10:56.971 }, 00:10:56.971 { 00:10:56.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.971 "dma_device_type": 2 00:10:56.971 }, 00:10:56.971 { 00:10:56.971 "dma_device_id": "system", 00:10:56.971 "dma_device_type": 1 00:10:56.971 }, 00:10:56.971 { 00:10:56.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.971 "dma_device_type": 2 00:10:56.971 }, 00:10:56.971 { 00:10:56.971 "dma_device_id": "system", 00:10:56.971 "dma_device_type": 1 00:10:56.971 }, 00:10:56.971 { 00:10:56.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.971 "dma_device_type": 2 00:10:56.971 } 00:10:56.971 ], 00:10:56.971 "driver_specific": { 00:10:56.971 "raid": { 00:10:56.971 "uuid": "decfbda2-2bda-4074-8c7a-fbf18e76a7dc", 00:10:56.971 "strip_size_kb": 64, 00:10:56.971 "state": "online", 00:10:56.971 "raid_level": "concat", 00:10:56.971 "superblock": false, 00:10:56.971 "num_base_bdevs": 3, 00:10:56.971 "num_base_bdevs_discovered": 3, 00:10:56.971 "num_base_bdevs_operational": 3, 00:10:56.971 "base_bdevs_list": [ 00:10:56.971 { 00:10:56.971 "name": "BaseBdev1", 00:10:56.971 "uuid": 
"276dba77-184f-4371-9997-a047a20c5404", 00:10:56.971 "is_configured": true, 00:10:56.971 "data_offset": 0, 00:10:56.971 "data_size": 65536 00:10:56.971 }, 00:10:56.971 { 00:10:56.971 "name": "BaseBdev2", 00:10:56.971 "uuid": "05ed9a9d-6849-4841-a9df-24736526d530", 00:10:56.971 "is_configured": true, 00:10:56.971 "data_offset": 0, 00:10:56.971 "data_size": 65536 00:10:56.971 }, 00:10:56.971 { 00:10:56.971 "name": "BaseBdev3", 00:10:56.971 "uuid": "5fd25470-8154-4030-bcef-b5c4db99f4ac", 00:10:56.971 "is_configured": true, 00:10:56.971 "data_offset": 0, 00:10:56.971 "data_size": 65536 00:10:56.971 } 00:10:56.971 ] 00:10:56.971 } 00:10:56.971 } 00:10:56.971 }' 00:10:56.971 20:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:56.971 20:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:56.971 BaseBdev2 00:10:56.971 BaseBdev3' 00:10:56.971 20:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.971 20:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:56.971 20:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:56.971 20:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:56.971 20:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.971 20:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.971 20:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.971 20:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.230 
20:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.230 20:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.230 20:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.230 20:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:57.230 20:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.230 20:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.230 20:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.230 20:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.230 20:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.230 20:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.230 20:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.230 20:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:57.230 20:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.230 20:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.230 20:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.230 20:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.230 20:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.230 
20:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.230 20:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:57.230 20:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.230 20:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.230 [2024-10-17 20:07:42.746296] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:57.230 [2024-10-17 20:07:42.746330] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:57.230 [2024-10-17 20:07:42.746452] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:57.230 20:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.230 20:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:57.230 20:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:57.230 20:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:57.230 20:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:57.230 20:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:57.230 20:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:10:57.230 20:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:57.230 20:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:57.230 20:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:57.230 20:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:10:57.230 20:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:57.230 20:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.230 20:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.231 20:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.231 20:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.231 20:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.231 20:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:57.231 20:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.231 20:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.231 20:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.489 20:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.489 "name": "Existed_Raid", 00:10:57.489 "uuid": "decfbda2-2bda-4074-8c7a-fbf18e76a7dc", 00:10:57.489 "strip_size_kb": 64, 00:10:57.489 "state": "offline", 00:10:57.489 "raid_level": "concat", 00:10:57.489 "superblock": false, 00:10:57.489 "num_base_bdevs": 3, 00:10:57.489 "num_base_bdevs_discovered": 2, 00:10:57.489 "num_base_bdevs_operational": 2, 00:10:57.489 "base_bdevs_list": [ 00:10:57.489 { 00:10:57.489 "name": null, 00:10:57.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.489 "is_configured": false, 00:10:57.489 "data_offset": 0, 00:10:57.489 "data_size": 65536 00:10:57.489 }, 00:10:57.489 { 00:10:57.489 "name": "BaseBdev2", 00:10:57.489 "uuid": "05ed9a9d-6849-4841-a9df-24736526d530", 00:10:57.489 
"is_configured": true, 00:10:57.489 "data_offset": 0, 00:10:57.489 "data_size": 65536 00:10:57.489 }, 00:10:57.489 { 00:10:57.489 "name": "BaseBdev3", 00:10:57.489 "uuid": "5fd25470-8154-4030-bcef-b5c4db99f4ac", 00:10:57.489 "is_configured": true, 00:10:57.489 "data_offset": 0, 00:10:57.489 "data_size": 65536 00:10:57.489 } 00:10:57.489 ] 00:10:57.489 }' 00:10:57.489 20:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.489 20:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.747 20:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:57.747 20:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:57.747 20:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.747 20:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:57.747 20:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.747 20:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.747 20:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.006 20:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:58.006 20:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:58.006 20:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:58.006 20:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.006 20:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.006 [2024-10-17 20:07:43.422539] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:10:58.006 20:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.006 20:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:58.006 20:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:58.006 20:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.006 20:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:58.006 20:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.006 20:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.006 20:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.006 20:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:58.006 20:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:58.006 20:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:58.006 20:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.006 20:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.006 [2024-10-17 20:07:43.555420] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:58.006 [2024-10-17 20:07:43.555642] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:58.006 20:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.006 20:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:58.006 20:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < 
num_base_bdevs )) 00:10:58.006 20:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.006 20:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:58.006 20:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.006 20:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.006 20:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.265 20:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:58.266 20:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:58.266 20:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:58.266 20:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:58.266 20:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:58.266 20:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:58.266 20:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.266 20:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.266 BaseBdev2 00:10:58.266 20:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.266 20:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:58.266 20:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:58.266 20:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:58.266 20:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 
-- # local i 00:10:58.266 20:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:58.266 20:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:58.266 20:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:58.266 20:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.266 20:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.266 20:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.266 20:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:58.266 20:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.266 20:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.266 [ 00:10:58.266 { 00:10:58.266 "name": "BaseBdev2", 00:10:58.266 "aliases": [ 00:10:58.266 "7c630e7a-0c73-43cf-90c3-2c9be1108df2" 00:10:58.266 ], 00:10:58.266 "product_name": "Malloc disk", 00:10:58.266 "block_size": 512, 00:10:58.266 "num_blocks": 65536, 00:10:58.266 "uuid": "7c630e7a-0c73-43cf-90c3-2c9be1108df2", 00:10:58.266 "assigned_rate_limits": { 00:10:58.266 "rw_ios_per_sec": 0, 00:10:58.266 "rw_mbytes_per_sec": 0, 00:10:58.266 "r_mbytes_per_sec": 0, 00:10:58.266 "w_mbytes_per_sec": 0 00:10:58.266 }, 00:10:58.266 "claimed": false, 00:10:58.266 "zoned": false, 00:10:58.266 "supported_io_types": { 00:10:58.266 "read": true, 00:10:58.266 "write": true, 00:10:58.266 "unmap": true, 00:10:58.266 "flush": true, 00:10:58.266 "reset": true, 00:10:58.266 "nvme_admin": false, 00:10:58.266 "nvme_io": false, 00:10:58.266 "nvme_io_md": false, 00:10:58.266 "write_zeroes": true, 00:10:58.266 "zcopy": true, 00:10:58.266 "get_zone_info": false, 
00:10:58.266 "zone_management": false, 00:10:58.266 "zone_append": false, 00:10:58.266 "compare": false, 00:10:58.266 "compare_and_write": false, 00:10:58.266 "abort": true, 00:10:58.266 "seek_hole": false, 00:10:58.266 "seek_data": false, 00:10:58.266 "copy": true, 00:10:58.266 "nvme_iov_md": false 00:10:58.266 }, 00:10:58.266 "memory_domains": [ 00:10:58.266 { 00:10:58.266 "dma_device_id": "system", 00:10:58.266 "dma_device_type": 1 00:10:58.266 }, 00:10:58.266 { 00:10:58.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.266 "dma_device_type": 2 00:10:58.266 } 00:10:58.266 ], 00:10:58.266 "driver_specific": {} 00:10:58.266 } 00:10:58.266 ] 00:10:58.266 20:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.266 20:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:58.266 20:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:58.266 20:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:58.266 20:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:58.266 20:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.266 20:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.266 BaseBdev3 00:10:58.266 20:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.266 20:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:58.266 20:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:58.266 20:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:58.266 20:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 
00:10:58.266 20:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:58.266 20:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:58.266 20:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:58.266 20:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.266 20:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.266 20:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.266 20:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:58.266 20:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.266 20:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.266 [ 00:10:58.266 { 00:10:58.266 "name": "BaseBdev3", 00:10:58.266 "aliases": [ 00:10:58.266 "01c89bd1-426e-4d58-9678-c3f6d5f8beb7" 00:10:58.266 ], 00:10:58.266 "product_name": "Malloc disk", 00:10:58.266 "block_size": 512, 00:10:58.266 "num_blocks": 65536, 00:10:58.266 "uuid": "01c89bd1-426e-4d58-9678-c3f6d5f8beb7", 00:10:58.266 "assigned_rate_limits": { 00:10:58.266 "rw_ios_per_sec": 0, 00:10:58.266 "rw_mbytes_per_sec": 0, 00:10:58.266 "r_mbytes_per_sec": 0, 00:10:58.266 "w_mbytes_per_sec": 0 00:10:58.266 }, 00:10:58.266 "claimed": false, 00:10:58.266 "zoned": false, 00:10:58.266 "supported_io_types": { 00:10:58.266 "read": true, 00:10:58.266 "write": true, 00:10:58.266 "unmap": true, 00:10:58.266 "flush": true, 00:10:58.266 "reset": true, 00:10:58.266 "nvme_admin": false, 00:10:58.266 "nvme_io": false, 00:10:58.266 "nvme_io_md": false, 00:10:58.266 "write_zeroes": true, 00:10:58.266 "zcopy": true, 00:10:58.266 "get_zone_info": false, 00:10:58.266 
"zone_management": false, 00:10:58.266 "zone_append": false, 00:10:58.266 "compare": false, 00:10:58.266 "compare_and_write": false, 00:10:58.266 "abort": true, 00:10:58.266 "seek_hole": false, 00:10:58.266 "seek_data": false, 00:10:58.266 "copy": true, 00:10:58.266 "nvme_iov_md": false 00:10:58.266 }, 00:10:58.266 "memory_domains": [ 00:10:58.266 { 00:10:58.266 "dma_device_id": "system", 00:10:58.266 "dma_device_type": 1 00:10:58.266 }, 00:10:58.266 { 00:10:58.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.266 "dma_device_type": 2 00:10:58.266 } 00:10:58.266 ], 00:10:58.266 "driver_specific": {} 00:10:58.266 } 00:10:58.266 ] 00:10:58.266 20:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.266 20:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:58.266 20:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:58.266 20:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:58.266 20:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:58.266 20:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.266 20:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.266 [2024-10-17 20:07:43.837666] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:58.266 [2024-10-17 20:07:43.837717] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:58.266 [2024-10-17 20:07:43.837762] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:58.266 [2024-10-17 20:07:43.840081] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:58.266 20:07:43 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.266 20:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:58.266 20:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:58.266 20:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:58.266 20:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:58.266 20:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:58.266 20:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:58.266 20:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.266 20:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.266 20:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.266 20:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.266 20:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.266 20:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.266 20:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.266 20:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.266 20:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.266 20:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.266 "name": "Existed_Raid", 00:10:58.266 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:58.266 "strip_size_kb": 64, 00:10:58.266 "state": "configuring", 00:10:58.266 "raid_level": "concat", 00:10:58.266 "superblock": false, 00:10:58.266 "num_base_bdevs": 3, 00:10:58.266 "num_base_bdevs_discovered": 2, 00:10:58.267 "num_base_bdevs_operational": 3, 00:10:58.267 "base_bdevs_list": [ 00:10:58.267 { 00:10:58.267 "name": "BaseBdev1", 00:10:58.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.267 "is_configured": false, 00:10:58.267 "data_offset": 0, 00:10:58.267 "data_size": 0 00:10:58.267 }, 00:10:58.267 { 00:10:58.267 "name": "BaseBdev2", 00:10:58.267 "uuid": "7c630e7a-0c73-43cf-90c3-2c9be1108df2", 00:10:58.267 "is_configured": true, 00:10:58.267 "data_offset": 0, 00:10:58.267 "data_size": 65536 00:10:58.267 }, 00:10:58.267 { 00:10:58.267 "name": "BaseBdev3", 00:10:58.267 "uuid": "01c89bd1-426e-4d58-9678-c3f6d5f8beb7", 00:10:58.267 "is_configured": true, 00:10:58.267 "data_offset": 0, 00:10:58.267 "data_size": 65536 00:10:58.267 } 00:10:58.267 ] 00:10:58.267 }' 00:10:58.267 20:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.267 20:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.834 20:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:58.834 20:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.834 20:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.834 [2024-10-17 20:07:44.365814] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:58.834 20:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.834 20:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:58.834 20:07:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:58.834 20:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:58.834 20:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:58.834 20:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:58.834 20:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:58.834 20:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.834 20:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.834 20:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.834 20:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.834 20:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.834 20:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.834 20:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.834 20:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.834 20:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.834 20:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.834 "name": "Existed_Raid", 00:10:58.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.834 "strip_size_kb": 64, 00:10:58.834 "state": "configuring", 00:10:58.834 "raid_level": "concat", 00:10:58.834 "superblock": false, 00:10:58.834 "num_base_bdevs": 3, 00:10:58.834 "num_base_bdevs_discovered": 1, 00:10:58.834 
"num_base_bdevs_operational": 3, 00:10:58.834 "base_bdevs_list": [ 00:10:58.834 { 00:10:58.834 "name": "BaseBdev1", 00:10:58.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.834 "is_configured": false, 00:10:58.834 "data_offset": 0, 00:10:58.834 "data_size": 0 00:10:58.834 }, 00:10:58.834 { 00:10:58.834 "name": null, 00:10:58.834 "uuid": "7c630e7a-0c73-43cf-90c3-2c9be1108df2", 00:10:58.834 "is_configured": false, 00:10:58.834 "data_offset": 0, 00:10:58.834 "data_size": 65536 00:10:58.834 }, 00:10:58.834 { 00:10:58.834 "name": "BaseBdev3", 00:10:58.834 "uuid": "01c89bd1-426e-4d58-9678-c3f6d5f8beb7", 00:10:58.834 "is_configured": true, 00:10:58.834 "data_offset": 0, 00:10:58.834 "data_size": 65536 00:10:58.834 } 00:10:58.834 ] 00:10:58.834 }' 00:10:58.834 20:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.834 20:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.402 20:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.402 20:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.402 20:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.402 20:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:59.402 20:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.402 20:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:59.402 20:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:59.402 20:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.402 20:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:59.402 [2024-10-17 20:07:44.976384] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:59.402 BaseBdev1 00:10:59.402 20:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.402 20:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:59.402 20:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:59.402 20:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:59.402 20:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:59.402 20:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:59.402 20:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:59.402 20:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:59.402 20:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.402 20:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.402 20:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.402 20:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:59.402 20:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.402 20:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.402 [ 00:10:59.402 { 00:10:59.402 "name": "BaseBdev1", 00:10:59.402 "aliases": [ 00:10:59.402 "687177e7-2aa0-4b9b-8a73-6625d89a18d4" 00:10:59.402 ], 00:10:59.402 "product_name": "Malloc disk", 00:10:59.402 "block_size": 512, 00:10:59.402 "num_blocks": 65536, 00:10:59.402 
"uuid": "687177e7-2aa0-4b9b-8a73-6625d89a18d4", 00:10:59.402 "assigned_rate_limits": { 00:10:59.402 "rw_ios_per_sec": 0, 00:10:59.402 "rw_mbytes_per_sec": 0, 00:10:59.402 "r_mbytes_per_sec": 0, 00:10:59.402 "w_mbytes_per_sec": 0 00:10:59.402 }, 00:10:59.402 "claimed": true, 00:10:59.402 "claim_type": "exclusive_write", 00:10:59.402 "zoned": false, 00:10:59.402 "supported_io_types": { 00:10:59.402 "read": true, 00:10:59.402 "write": true, 00:10:59.402 "unmap": true, 00:10:59.402 "flush": true, 00:10:59.402 "reset": true, 00:10:59.402 "nvme_admin": false, 00:10:59.402 "nvme_io": false, 00:10:59.402 "nvme_io_md": false, 00:10:59.402 "write_zeroes": true, 00:10:59.402 "zcopy": true, 00:10:59.402 "get_zone_info": false, 00:10:59.402 "zone_management": false, 00:10:59.402 "zone_append": false, 00:10:59.402 "compare": false, 00:10:59.402 "compare_and_write": false, 00:10:59.402 "abort": true, 00:10:59.402 "seek_hole": false, 00:10:59.402 "seek_data": false, 00:10:59.402 "copy": true, 00:10:59.402 "nvme_iov_md": false 00:10:59.402 }, 00:10:59.402 "memory_domains": [ 00:10:59.402 { 00:10:59.402 "dma_device_id": "system", 00:10:59.402 "dma_device_type": 1 00:10:59.402 }, 00:10:59.402 { 00:10:59.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.402 "dma_device_type": 2 00:10:59.402 } 00:10:59.402 ], 00:10:59.402 "driver_specific": {} 00:10:59.402 } 00:10:59.402 ] 00:10:59.402 20:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.402 20:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:59.402 20:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:59.402 20:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:59.402 20:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:59.402 
20:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:59.402 20:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:59.403 20:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:59.403 20:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.403 20:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.403 20:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.403 20:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.403 20:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.403 20:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.403 20:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.403 20:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.403 20:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.661 20:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.661 "name": "Existed_Raid", 00:10:59.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.661 "strip_size_kb": 64, 00:10:59.661 "state": "configuring", 00:10:59.661 "raid_level": "concat", 00:10:59.661 "superblock": false, 00:10:59.661 "num_base_bdevs": 3, 00:10:59.661 "num_base_bdevs_discovered": 2, 00:10:59.661 "num_base_bdevs_operational": 3, 00:10:59.661 "base_bdevs_list": [ 00:10:59.661 { 00:10:59.661 "name": "BaseBdev1", 00:10:59.661 "uuid": "687177e7-2aa0-4b9b-8a73-6625d89a18d4", 00:10:59.661 "is_configured": true, 00:10:59.661 
"data_offset": 0, 00:10:59.661 "data_size": 65536 00:10:59.661 }, 00:10:59.661 { 00:10:59.661 "name": null, 00:10:59.661 "uuid": "7c630e7a-0c73-43cf-90c3-2c9be1108df2", 00:10:59.661 "is_configured": false, 00:10:59.661 "data_offset": 0, 00:10:59.661 "data_size": 65536 00:10:59.661 }, 00:10:59.661 { 00:10:59.661 "name": "BaseBdev3", 00:10:59.661 "uuid": "01c89bd1-426e-4d58-9678-c3f6d5f8beb7", 00:10:59.661 "is_configured": true, 00:10:59.661 "data_offset": 0, 00:10:59.661 "data_size": 65536 00:10:59.661 } 00:10:59.661 ] 00:10:59.661 }' 00:10:59.661 20:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.661 20:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.920 20:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.920 20:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:59.920 20:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.920 20:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.920 20:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.179 20:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:00.179 20:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:00.179 20:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.179 20:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.179 [2024-10-17 20:07:45.600697] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:00.179 20:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.179 
20:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:00.179 20:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:00.179 20:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:00.179 20:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:00.179 20:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.179 20:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:00.179 20:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.179 20:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.179 20:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.179 20:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.179 20:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.179 20:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.179 20:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.179 20:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.179 20:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.179 20:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.179 "name": "Existed_Raid", 00:11:00.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.179 "strip_size_kb": 64, 00:11:00.179 "state": "configuring", 
00:11:00.179 "raid_level": "concat", 00:11:00.179 "superblock": false, 00:11:00.179 "num_base_bdevs": 3, 00:11:00.179 "num_base_bdevs_discovered": 1, 00:11:00.179 "num_base_bdevs_operational": 3, 00:11:00.179 "base_bdevs_list": [ 00:11:00.179 { 00:11:00.179 "name": "BaseBdev1", 00:11:00.179 "uuid": "687177e7-2aa0-4b9b-8a73-6625d89a18d4", 00:11:00.179 "is_configured": true, 00:11:00.179 "data_offset": 0, 00:11:00.179 "data_size": 65536 00:11:00.179 }, 00:11:00.179 { 00:11:00.179 "name": null, 00:11:00.179 "uuid": "7c630e7a-0c73-43cf-90c3-2c9be1108df2", 00:11:00.179 "is_configured": false, 00:11:00.179 "data_offset": 0, 00:11:00.179 "data_size": 65536 00:11:00.179 }, 00:11:00.179 { 00:11:00.179 "name": null, 00:11:00.179 "uuid": "01c89bd1-426e-4d58-9678-c3f6d5f8beb7", 00:11:00.179 "is_configured": false, 00:11:00.179 "data_offset": 0, 00:11:00.179 "data_size": 65536 00:11:00.179 } 00:11:00.179 ] 00:11:00.180 }' 00:11:00.180 20:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.180 20:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.747 20:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.747 20:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:00.747 20:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.747 20:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.747 20:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.747 20:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:00.747 20:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:00.747 20:07:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.747 20:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.747 [2024-10-17 20:07:46.204918] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:00.747 20:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.747 20:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:00.747 20:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:00.747 20:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:00.747 20:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:00.747 20:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.747 20:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:00.747 20:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.747 20:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.747 20:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.747 20:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.747 20:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.747 20:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.747 20:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.747 20:07:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.747 20:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.747 20:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.747 "name": "Existed_Raid", 00:11:00.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.747 "strip_size_kb": 64, 00:11:00.747 "state": "configuring", 00:11:00.747 "raid_level": "concat", 00:11:00.747 "superblock": false, 00:11:00.747 "num_base_bdevs": 3, 00:11:00.747 "num_base_bdevs_discovered": 2, 00:11:00.747 "num_base_bdevs_operational": 3, 00:11:00.747 "base_bdevs_list": [ 00:11:00.747 { 00:11:00.747 "name": "BaseBdev1", 00:11:00.747 "uuid": "687177e7-2aa0-4b9b-8a73-6625d89a18d4", 00:11:00.747 "is_configured": true, 00:11:00.747 "data_offset": 0, 00:11:00.747 "data_size": 65536 00:11:00.747 }, 00:11:00.747 { 00:11:00.747 "name": null, 00:11:00.747 "uuid": "7c630e7a-0c73-43cf-90c3-2c9be1108df2", 00:11:00.747 "is_configured": false, 00:11:00.747 "data_offset": 0, 00:11:00.747 "data_size": 65536 00:11:00.747 }, 00:11:00.747 { 00:11:00.747 "name": "BaseBdev3", 00:11:00.747 "uuid": "01c89bd1-426e-4d58-9678-c3f6d5f8beb7", 00:11:00.747 "is_configured": true, 00:11:00.747 "data_offset": 0, 00:11:00.747 "data_size": 65536 00:11:00.747 } 00:11:00.747 ] 00:11:00.747 }' 00:11:00.747 20:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.747 20:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.314 20:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:01.314 20:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.314 20:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.314 20:07:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.314 20:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.314 20:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:01.314 20:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:01.314 20:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.314 20:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.314 [2024-10-17 20:07:46.797067] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:01.314 20:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.314 20:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:01.314 20:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:01.314 20:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:01.314 20:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:01.314 20:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:01.314 20:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:01.314 20:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.314 20:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.314 20:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.314 20:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:11:01.314 20:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.314 20:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:01.314 20:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.314 20:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.314 20:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.314 20:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.314 "name": "Existed_Raid", 00:11:01.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.314 "strip_size_kb": 64, 00:11:01.314 "state": "configuring", 00:11:01.314 "raid_level": "concat", 00:11:01.314 "superblock": false, 00:11:01.314 "num_base_bdevs": 3, 00:11:01.314 "num_base_bdevs_discovered": 1, 00:11:01.314 "num_base_bdevs_operational": 3, 00:11:01.314 "base_bdevs_list": [ 00:11:01.314 { 00:11:01.314 "name": null, 00:11:01.314 "uuid": "687177e7-2aa0-4b9b-8a73-6625d89a18d4", 00:11:01.314 "is_configured": false, 00:11:01.314 "data_offset": 0, 00:11:01.314 "data_size": 65536 00:11:01.314 }, 00:11:01.314 { 00:11:01.314 "name": null, 00:11:01.314 "uuid": "7c630e7a-0c73-43cf-90c3-2c9be1108df2", 00:11:01.314 "is_configured": false, 00:11:01.314 "data_offset": 0, 00:11:01.314 "data_size": 65536 00:11:01.314 }, 00:11:01.314 { 00:11:01.314 "name": "BaseBdev3", 00:11:01.314 "uuid": "01c89bd1-426e-4d58-9678-c3f6d5f8beb7", 00:11:01.314 "is_configured": true, 00:11:01.314 "data_offset": 0, 00:11:01.314 "data_size": 65536 00:11:01.314 } 00:11:01.314 ] 00:11:01.314 }' 00:11:01.314 20:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.314 20:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.878 
20:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.878 20:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:01.878 20:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.878 20:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.878 20:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.878 20:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:01.878 20:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:01.878 20:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.878 20:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.878 [2024-10-17 20:07:47.478613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:01.878 20:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.878 20:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:01.878 20:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:01.878 20:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:01.878 20:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:01.878 20:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:01.878 20:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:01.878 
20:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.878 20:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.878 20:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.878 20:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.878 20:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.879 20:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.879 20:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.879 20:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:01.879 20:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.136 20:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.137 "name": "Existed_Raid", 00:11:02.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.137 "strip_size_kb": 64, 00:11:02.137 "state": "configuring", 00:11:02.137 "raid_level": "concat", 00:11:02.137 "superblock": false, 00:11:02.137 "num_base_bdevs": 3, 00:11:02.137 "num_base_bdevs_discovered": 2, 00:11:02.137 "num_base_bdevs_operational": 3, 00:11:02.137 "base_bdevs_list": [ 00:11:02.137 { 00:11:02.137 "name": null, 00:11:02.137 "uuid": "687177e7-2aa0-4b9b-8a73-6625d89a18d4", 00:11:02.137 "is_configured": false, 00:11:02.137 "data_offset": 0, 00:11:02.137 "data_size": 65536 00:11:02.137 }, 00:11:02.137 { 00:11:02.137 "name": "BaseBdev2", 00:11:02.137 "uuid": "7c630e7a-0c73-43cf-90c3-2c9be1108df2", 00:11:02.137 "is_configured": true, 00:11:02.137 "data_offset": 0, 00:11:02.137 "data_size": 65536 00:11:02.137 }, 00:11:02.137 { 00:11:02.137 "name": "BaseBdev3", 00:11:02.137 
"uuid": "01c89bd1-426e-4d58-9678-c3f6d5f8beb7", 00:11:02.137 "is_configured": true, 00:11:02.137 "data_offset": 0, 00:11:02.137 "data_size": 65536 00:11:02.137 } 00:11:02.137 ] 00:11:02.137 }' 00:11:02.137 20:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.137 20:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.402 20:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.402 20:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:02.403 20:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.403 20:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.403 20:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.661 20:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:02.661 20:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.661 20:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:02.661 20:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.661 20:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.661 20:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.661 20:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 687177e7-2aa0-4b9b-8a73-6625d89a18d4 00:11:02.661 20:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.661 20:07:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:02.661 [2024-10-17 20:07:48.154340] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:02.662 [2024-10-17 20:07:48.154409] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:02.662 [2024-10-17 20:07:48.154424] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:11:02.662 [2024-10-17 20:07:48.154706] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:02.662 [2024-10-17 20:07:48.154876] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:02.662 [2024-10-17 20:07:48.154891] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:02.662 [2024-10-17 20:07:48.155239] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:02.662 NewBaseBdev 00:11:02.662 20:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.662 20:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:02.662 20:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:11:02.662 20:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:02.662 20:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:02.662 20:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:02.662 20:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:02.662 20:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:02.662 20:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.662 
20:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.662 20:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.662 20:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:02.662 20:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.662 20:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.662 [ 00:11:02.662 { 00:11:02.662 "name": "NewBaseBdev", 00:11:02.662 "aliases": [ 00:11:02.662 "687177e7-2aa0-4b9b-8a73-6625d89a18d4" 00:11:02.662 ], 00:11:02.662 "product_name": "Malloc disk", 00:11:02.662 "block_size": 512, 00:11:02.662 "num_blocks": 65536, 00:11:02.662 "uuid": "687177e7-2aa0-4b9b-8a73-6625d89a18d4", 00:11:02.662 "assigned_rate_limits": { 00:11:02.662 "rw_ios_per_sec": 0, 00:11:02.662 "rw_mbytes_per_sec": 0, 00:11:02.662 "r_mbytes_per_sec": 0, 00:11:02.662 "w_mbytes_per_sec": 0 00:11:02.662 }, 00:11:02.662 "claimed": true, 00:11:02.662 "claim_type": "exclusive_write", 00:11:02.662 "zoned": false, 00:11:02.662 "supported_io_types": { 00:11:02.662 "read": true, 00:11:02.662 "write": true, 00:11:02.662 "unmap": true, 00:11:02.662 "flush": true, 00:11:02.662 "reset": true, 00:11:02.662 "nvme_admin": false, 00:11:02.662 "nvme_io": false, 00:11:02.662 "nvme_io_md": false, 00:11:02.662 "write_zeroes": true, 00:11:02.662 "zcopy": true, 00:11:02.662 "get_zone_info": false, 00:11:02.662 "zone_management": false, 00:11:02.662 "zone_append": false, 00:11:02.662 "compare": false, 00:11:02.662 "compare_and_write": false, 00:11:02.662 "abort": true, 00:11:02.662 "seek_hole": false, 00:11:02.662 "seek_data": false, 00:11:02.662 "copy": true, 00:11:02.662 "nvme_iov_md": false 00:11:02.662 }, 00:11:02.662 "memory_domains": [ 00:11:02.662 { 00:11:02.662 "dma_device_id": "system", 00:11:02.662 "dma_device_type": 1 
00:11:02.662 }, 00:11:02.662 { 00:11:02.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.662 "dma_device_type": 2 00:11:02.662 } 00:11:02.662 ], 00:11:02.662 "driver_specific": {} 00:11:02.662 } 00:11:02.662 ] 00:11:02.662 20:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.662 20:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:02.662 20:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:11:02.662 20:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:02.662 20:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:02.662 20:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:02.662 20:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:02.662 20:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:02.662 20:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.662 20:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.662 20:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.662 20:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.662 20:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.662 20:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.662 20:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.662 20:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "Existed_Raid")' 00:11:02.662 20:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.662 20:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.662 "name": "Existed_Raid", 00:11:02.662 "uuid": "79e9af0c-17d5-4e3b-857a-a7c3e5742afa", 00:11:02.662 "strip_size_kb": 64, 00:11:02.662 "state": "online", 00:11:02.662 "raid_level": "concat", 00:11:02.662 "superblock": false, 00:11:02.662 "num_base_bdevs": 3, 00:11:02.662 "num_base_bdevs_discovered": 3, 00:11:02.662 "num_base_bdevs_operational": 3, 00:11:02.662 "base_bdevs_list": [ 00:11:02.662 { 00:11:02.662 "name": "NewBaseBdev", 00:11:02.662 "uuid": "687177e7-2aa0-4b9b-8a73-6625d89a18d4", 00:11:02.662 "is_configured": true, 00:11:02.662 "data_offset": 0, 00:11:02.662 "data_size": 65536 00:11:02.662 }, 00:11:02.662 { 00:11:02.662 "name": "BaseBdev2", 00:11:02.662 "uuid": "7c630e7a-0c73-43cf-90c3-2c9be1108df2", 00:11:02.662 "is_configured": true, 00:11:02.662 "data_offset": 0, 00:11:02.662 "data_size": 65536 00:11:02.662 }, 00:11:02.662 { 00:11:02.662 "name": "BaseBdev3", 00:11:02.662 "uuid": "01c89bd1-426e-4d58-9678-c3f6d5f8beb7", 00:11:02.662 "is_configured": true, 00:11:02.662 "data_offset": 0, 00:11:02.662 "data_size": 65536 00:11:02.662 } 00:11:02.662 ] 00:11:02.662 }' 00:11:02.662 20:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.662 20:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.230 20:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:03.230 20:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:03.230 20:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:03.230 20:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 
-- # local base_bdev_names 00:11:03.230 20:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:03.230 20:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:03.230 20:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:03.230 20:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.230 20:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:03.230 20:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.230 [2024-10-17 20:07:48.722946] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:03.230 20:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.230 20:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:03.230 "name": "Existed_Raid", 00:11:03.230 "aliases": [ 00:11:03.230 "79e9af0c-17d5-4e3b-857a-a7c3e5742afa" 00:11:03.230 ], 00:11:03.230 "product_name": "Raid Volume", 00:11:03.230 "block_size": 512, 00:11:03.230 "num_blocks": 196608, 00:11:03.230 "uuid": "79e9af0c-17d5-4e3b-857a-a7c3e5742afa", 00:11:03.230 "assigned_rate_limits": { 00:11:03.230 "rw_ios_per_sec": 0, 00:11:03.230 "rw_mbytes_per_sec": 0, 00:11:03.230 "r_mbytes_per_sec": 0, 00:11:03.230 "w_mbytes_per_sec": 0 00:11:03.230 }, 00:11:03.230 "claimed": false, 00:11:03.230 "zoned": false, 00:11:03.230 "supported_io_types": { 00:11:03.230 "read": true, 00:11:03.230 "write": true, 00:11:03.230 "unmap": true, 00:11:03.230 "flush": true, 00:11:03.230 "reset": true, 00:11:03.230 "nvme_admin": false, 00:11:03.230 "nvme_io": false, 00:11:03.230 "nvme_io_md": false, 00:11:03.230 "write_zeroes": true, 00:11:03.230 "zcopy": false, 00:11:03.230 "get_zone_info": false, 00:11:03.230 "zone_management": false, 
00:11:03.230 "zone_append": false, 00:11:03.230 "compare": false, 00:11:03.230 "compare_and_write": false, 00:11:03.230 "abort": false, 00:11:03.230 "seek_hole": false, 00:11:03.230 "seek_data": false, 00:11:03.230 "copy": false, 00:11:03.230 "nvme_iov_md": false 00:11:03.230 }, 00:11:03.230 "memory_domains": [ 00:11:03.230 { 00:11:03.230 "dma_device_id": "system", 00:11:03.230 "dma_device_type": 1 00:11:03.230 }, 00:11:03.230 { 00:11:03.230 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.230 "dma_device_type": 2 00:11:03.230 }, 00:11:03.230 { 00:11:03.230 "dma_device_id": "system", 00:11:03.230 "dma_device_type": 1 00:11:03.230 }, 00:11:03.230 { 00:11:03.230 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.230 "dma_device_type": 2 00:11:03.230 }, 00:11:03.230 { 00:11:03.230 "dma_device_id": "system", 00:11:03.230 "dma_device_type": 1 00:11:03.230 }, 00:11:03.230 { 00:11:03.230 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.230 "dma_device_type": 2 00:11:03.230 } 00:11:03.230 ], 00:11:03.230 "driver_specific": { 00:11:03.230 "raid": { 00:11:03.230 "uuid": "79e9af0c-17d5-4e3b-857a-a7c3e5742afa", 00:11:03.230 "strip_size_kb": 64, 00:11:03.230 "state": "online", 00:11:03.230 "raid_level": "concat", 00:11:03.230 "superblock": false, 00:11:03.230 "num_base_bdevs": 3, 00:11:03.230 "num_base_bdevs_discovered": 3, 00:11:03.230 "num_base_bdevs_operational": 3, 00:11:03.230 "base_bdevs_list": [ 00:11:03.230 { 00:11:03.230 "name": "NewBaseBdev", 00:11:03.230 "uuid": "687177e7-2aa0-4b9b-8a73-6625d89a18d4", 00:11:03.230 "is_configured": true, 00:11:03.230 "data_offset": 0, 00:11:03.230 "data_size": 65536 00:11:03.230 }, 00:11:03.230 { 00:11:03.230 "name": "BaseBdev2", 00:11:03.230 "uuid": "7c630e7a-0c73-43cf-90c3-2c9be1108df2", 00:11:03.231 "is_configured": true, 00:11:03.231 "data_offset": 0, 00:11:03.231 "data_size": 65536 00:11:03.231 }, 00:11:03.231 { 00:11:03.231 "name": "BaseBdev3", 00:11:03.231 "uuid": "01c89bd1-426e-4d58-9678-c3f6d5f8beb7", 00:11:03.231 
"is_configured": true, 00:11:03.231 "data_offset": 0, 00:11:03.231 "data_size": 65536 00:11:03.231 } 00:11:03.231 ] 00:11:03.231 } 00:11:03.231 } 00:11:03.231 }' 00:11:03.231 20:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:03.231 20:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:03.231 BaseBdev2 00:11:03.231 BaseBdev3' 00:11:03.231 20:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.231 20:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:03.231 20:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:03.231 20:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:03.231 20:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.231 20:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.231 20:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.490 20:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.490 20:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:03.490 20:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:03.490 20:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:03.490 20:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:11:03.490 20:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:03.490 20:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.490 20:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.490 20:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.490 20:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:03.490 20:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:03.490 20:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:03.490 20:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:03.490 20:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.490 20:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.490 20:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.490 20:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.490 20:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:03.490 20:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:03.490 20:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:03.490 20:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.490 20:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.490 [2024-10-17 20:07:49.046704] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:03.490 [2024-10-17 20:07:49.046735] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:03.490 [2024-10-17 20:07:49.046827] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:03.490 [2024-10-17 20:07:49.046893] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:03.490 [2024-10-17 20:07:49.046910] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:03.490 20:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.490 20:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65529 00:11:03.490 20:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 65529 ']' 00:11:03.490 20:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 65529 00:11:03.490 20:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:11:03.490 20:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:03.490 20:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65529 00:11:03.490 killing process with pid 65529 00:11:03.490 20:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:03.490 20:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:03.490 20:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65529' 00:11:03.490 20:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 65529 00:11:03.491 [2024-10-17 20:07:49.086760] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:03.491 20:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 65529 00:11:03.749 [2024-10-17 20:07:49.328618] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:04.685 20:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:04.685 00:11:04.685 real 0m11.818s 00:11:04.685 user 0m19.884s 00:11:04.685 sys 0m1.563s 00:11:04.685 20:07:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:04.685 ************************************ 00:11:04.685 END TEST raid_state_function_test 00:11:04.685 ************************************ 00:11:04.685 20:07:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.685 20:07:50 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:11:04.685 20:07:50 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:04.685 20:07:50 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:04.685 20:07:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:04.685 ************************************ 00:11:04.685 START TEST raid_state_function_test_sb 00:11:04.685 ************************************ 00:11:04.685 20:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 true 00:11:04.685 20:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:04.685 20:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:11:04.685 20:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:04.685 20:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:04.685 20:07:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:04.685 20:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:04.685 20:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:04.685 20:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:04.686 20:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:04.686 20:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:04.686 20:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:04.686 20:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:04.686 20:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:04.686 20:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:04.686 20:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:04.686 20:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:04.686 20:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:04.686 20:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:04.686 20:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:04.686 20:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:04.686 20:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:04.686 20:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 
00:11:04.686 20:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:04.686 20:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:04.686 20:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:04.686 20:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:04.686 20:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66161 00:11:04.686 20:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:04.686 Process raid pid: 66161 00:11:04.686 20:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66161' 00:11:04.686 20:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66161 00:11:04.686 20:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 66161 ']' 00:11:04.686 20:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:04.686 20:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:04.686 20:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:04.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:04.686 20:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:04.686 20:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.944 [2024-10-17 20:07:50.497008] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 
00:11:04.944 [2024-10-17 20:07:50.497177] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:05.204 [2024-10-17 20:07:50.676934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:05.204 [2024-10-17 20:07:50.805376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.462 [2024-10-17 20:07:50.995942] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:05.462 [2024-10-17 20:07:50.996038] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:06.030 20:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:06.030 20:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:11:06.030 20:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:06.030 20:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.030 20:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.030 [2024-10-17 20:07:51.496320] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:06.030 [2024-10-17 20:07:51.496616] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:06.030 [2024-10-17 20:07:51.496643] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:06.030 [2024-10-17 20:07:51.496662] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:06.030 [2024-10-17 20:07:51.496672] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:11:06.030 [2024-10-17 20:07:51.496689] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:06.030 20:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.030 20:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:06.030 20:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:06.030 20:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:06.030 20:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:06.030 20:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:06.030 20:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:06.030 20:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.030 20:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.030 20:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.030 20:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.030 20:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:06.030 20:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.030 20:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.030 20:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.030 20:07:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.030 20:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.030 "name": "Existed_Raid", 00:11:06.030 "uuid": "5195851f-4475-4405-bed7-62a84e79e281", 00:11:06.030 "strip_size_kb": 64, 00:11:06.030 "state": "configuring", 00:11:06.030 "raid_level": "concat", 00:11:06.030 "superblock": true, 00:11:06.030 "num_base_bdevs": 3, 00:11:06.030 "num_base_bdevs_discovered": 0, 00:11:06.030 "num_base_bdevs_operational": 3, 00:11:06.030 "base_bdevs_list": [ 00:11:06.030 { 00:11:06.030 "name": "BaseBdev1", 00:11:06.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.030 "is_configured": false, 00:11:06.030 "data_offset": 0, 00:11:06.030 "data_size": 0 00:11:06.030 }, 00:11:06.030 { 00:11:06.030 "name": "BaseBdev2", 00:11:06.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.030 "is_configured": false, 00:11:06.030 "data_offset": 0, 00:11:06.030 "data_size": 0 00:11:06.030 }, 00:11:06.030 { 00:11:06.030 "name": "BaseBdev3", 00:11:06.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.030 "is_configured": false, 00:11:06.030 "data_offset": 0, 00:11:06.030 "data_size": 0 00:11:06.030 } 00:11:06.030 ] 00:11:06.030 }' 00:11:06.030 20:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.030 20:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.598 20:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:06.598 20:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.598 20:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.598 [2024-10-17 20:07:52.036382] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:06.598 [2024-10-17 20:07:52.036607] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:06.598 20:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.598 20:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:06.598 20:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.598 20:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.598 [2024-10-17 20:07:52.044444] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:06.598 [2024-10-17 20:07:52.044526] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:06.598 [2024-10-17 20:07:52.044541] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:06.598 [2024-10-17 20:07:52.044556] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:06.598 [2024-10-17 20:07:52.044565] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:06.598 [2024-10-17 20:07:52.044579] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:06.598 20:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.598 20:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:06.598 20:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.598 20:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.598 [2024-10-17 20:07:52.085227] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:06.598 BaseBdev1 
00:11:06.598 20:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.598 20:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:06.598 20:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:06.598 20:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:06.598 20:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:06.598 20:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:06.598 20:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:06.598 20:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:06.598 20:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.598 20:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.598 20:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.598 20:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:06.598 20:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.598 20:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.598 [ 00:11:06.598 { 00:11:06.598 "name": "BaseBdev1", 00:11:06.598 "aliases": [ 00:11:06.598 "1e6b1a7e-9044-4e52-b690-0893e0506e0a" 00:11:06.598 ], 00:11:06.598 "product_name": "Malloc disk", 00:11:06.598 "block_size": 512, 00:11:06.598 "num_blocks": 65536, 00:11:06.598 "uuid": "1e6b1a7e-9044-4e52-b690-0893e0506e0a", 00:11:06.598 "assigned_rate_limits": { 00:11:06.598 
"rw_ios_per_sec": 0, 00:11:06.598 "rw_mbytes_per_sec": 0, 00:11:06.598 "r_mbytes_per_sec": 0, 00:11:06.598 "w_mbytes_per_sec": 0 00:11:06.598 }, 00:11:06.598 "claimed": true, 00:11:06.598 "claim_type": "exclusive_write", 00:11:06.598 "zoned": false, 00:11:06.598 "supported_io_types": { 00:11:06.598 "read": true, 00:11:06.598 "write": true, 00:11:06.598 "unmap": true, 00:11:06.598 "flush": true, 00:11:06.598 "reset": true, 00:11:06.598 "nvme_admin": false, 00:11:06.598 "nvme_io": false, 00:11:06.598 "nvme_io_md": false, 00:11:06.598 "write_zeroes": true, 00:11:06.598 "zcopy": true, 00:11:06.598 "get_zone_info": false, 00:11:06.598 "zone_management": false, 00:11:06.598 "zone_append": false, 00:11:06.598 "compare": false, 00:11:06.598 "compare_and_write": false, 00:11:06.598 "abort": true, 00:11:06.598 "seek_hole": false, 00:11:06.598 "seek_data": false, 00:11:06.598 "copy": true, 00:11:06.598 "nvme_iov_md": false 00:11:06.598 }, 00:11:06.598 "memory_domains": [ 00:11:06.598 { 00:11:06.598 "dma_device_id": "system", 00:11:06.598 "dma_device_type": 1 00:11:06.598 }, 00:11:06.598 { 00:11:06.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.598 "dma_device_type": 2 00:11:06.598 } 00:11:06.598 ], 00:11:06.598 "driver_specific": {} 00:11:06.598 } 00:11:06.598 ] 00:11:06.598 20:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.598 20:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:06.598 20:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:06.598 20:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:06.598 20:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:06.598 20:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:11:06.598 20:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:06.598 20:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:06.598 20:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.598 20:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.598 20:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.598 20:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.598 20:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.598 20:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:06.598 20:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.598 20:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.598 20:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.598 20:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.598 "name": "Existed_Raid", 00:11:06.598 "uuid": "e73164d6-6933-48a0-9ae5-a16ada2b6771", 00:11:06.598 "strip_size_kb": 64, 00:11:06.598 "state": "configuring", 00:11:06.598 "raid_level": "concat", 00:11:06.598 "superblock": true, 00:11:06.598 "num_base_bdevs": 3, 00:11:06.598 "num_base_bdevs_discovered": 1, 00:11:06.598 "num_base_bdevs_operational": 3, 00:11:06.598 "base_bdevs_list": [ 00:11:06.598 { 00:11:06.598 "name": "BaseBdev1", 00:11:06.598 "uuid": "1e6b1a7e-9044-4e52-b690-0893e0506e0a", 00:11:06.598 "is_configured": true, 00:11:06.598 "data_offset": 2048, 00:11:06.598 "data_size": 
63488 00:11:06.598 }, 00:11:06.598 { 00:11:06.598 "name": "BaseBdev2", 00:11:06.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.598 "is_configured": false, 00:11:06.598 "data_offset": 0, 00:11:06.598 "data_size": 0 00:11:06.598 }, 00:11:06.598 { 00:11:06.598 "name": "BaseBdev3", 00:11:06.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.598 "is_configured": false, 00:11:06.598 "data_offset": 0, 00:11:06.598 "data_size": 0 00:11:06.598 } 00:11:06.598 ] 00:11:06.598 }' 00:11:06.598 20:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.598 20:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.166 20:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:07.166 20:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.166 20:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.166 [2024-10-17 20:07:52.661841] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:07.166 [2024-10-17 20:07:52.661904] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:07.166 20:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.166 20:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:07.166 20:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.166 20:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.166 [2024-10-17 20:07:52.673877] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:07.166 [2024-10-17 
20:07:52.676453] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:07.166 [2024-10-17 20:07:52.676705] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:07.166 [2024-10-17 20:07:52.676734] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:07.166 [2024-10-17 20:07:52.676753] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:07.166 20:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.166 20:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:07.166 20:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:07.166 20:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:07.166 20:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:07.166 20:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:07.166 20:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:07.166 20:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:07.166 20:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:07.166 20:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.166 20:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.166 20:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.166 20:07:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.166 20:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.166 20:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.166 20:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:07.166 20:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.166 20:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.166 20:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.166 "name": "Existed_Raid", 00:11:07.166 "uuid": "c7cc3875-00b1-4050-acf6-29f017040932", 00:11:07.166 "strip_size_kb": 64, 00:11:07.166 "state": "configuring", 00:11:07.167 "raid_level": "concat", 00:11:07.167 "superblock": true, 00:11:07.167 "num_base_bdevs": 3, 00:11:07.167 "num_base_bdevs_discovered": 1, 00:11:07.167 "num_base_bdevs_operational": 3, 00:11:07.167 "base_bdevs_list": [ 00:11:07.167 { 00:11:07.167 "name": "BaseBdev1", 00:11:07.167 "uuid": "1e6b1a7e-9044-4e52-b690-0893e0506e0a", 00:11:07.167 "is_configured": true, 00:11:07.167 "data_offset": 2048, 00:11:07.167 "data_size": 63488 00:11:07.167 }, 00:11:07.167 { 00:11:07.167 "name": "BaseBdev2", 00:11:07.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.167 "is_configured": false, 00:11:07.167 "data_offset": 0, 00:11:07.167 "data_size": 0 00:11:07.167 }, 00:11:07.167 { 00:11:07.167 "name": "BaseBdev3", 00:11:07.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.167 "is_configured": false, 00:11:07.167 "data_offset": 0, 00:11:07.167 "data_size": 0 00:11:07.167 } 00:11:07.167 ] 00:11:07.167 }' 00:11:07.167 20:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.167 20:07:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:07.735 20:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:07.735 20:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.735 20:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.735 [2024-10-17 20:07:53.227809] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:07.735 BaseBdev2 00:11:07.735 20:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.735 20:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:07.735 20:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:07.735 20:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:07.735 20:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:07.735 20:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:07.735 20:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:07.735 20:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:07.735 20:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.735 20:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.735 20:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.735 20:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:07.735 20:07:53 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.735 20:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.735 [ 00:11:07.735 { 00:11:07.735 "name": "BaseBdev2", 00:11:07.735 "aliases": [ 00:11:07.735 "d7d57815-315b-4215-b6c8-4b6227473e9f" 00:11:07.735 ], 00:11:07.735 "product_name": "Malloc disk", 00:11:07.735 "block_size": 512, 00:11:07.735 "num_blocks": 65536, 00:11:07.735 "uuid": "d7d57815-315b-4215-b6c8-4b6227473e9f", 00:11:07.735 "assigned_rate_limits": { 00:11:07.735 "rw_ios_per_sec": 0, 00:11:07.735 "rw_mbytes_per_sec": 0, 00:11:07.735 "r_mbytes_per_sec": 0, 00:11:07.735 "w_mbytes_per_sec": 0 00:11:07.735 }, 00:11:07.735 "claimed": true, 00:11:07.736 "claim_type": "exclusive_write", 00:11:07.736 "zoned": false, 00:11:07.736 "supported_io_types": { 00:11:07.736 "read": true, 00:11:07.736 "write": true, 00:11:07.736 "unmap": true, 00:11:07.736 "flush": true, 00:11:07.736 "reset": true, 00:11:07.736 "nvme_admin": false, 00:11:07.736 "nvme_io": false, 00:11:07.736 "nvme_io_md": false, 00:11:07.736 "write_zeroes": true, 00:11:07.736 "zcopy": true, 00:11:07.736 "get_zone_info": false, 00:11:07.736 "zone_management": false, 00:11:07.736 "zone_append": false, 00:11:07.736 "compare": false, 00:11:07.736 "compare_and_write": false, 00:11:07.736 "abort": true, 00:11:07.736 "seek_hole": false, 00:11:07.736 "seek_data": false, 00:11:07.736 "copy": true, 00:11:07.736 "nvme_iov_md": false 00:11:07.736 }, 00:11:07.736 "memory_domains": [ 00:11:07.736 { 00:11:07.736 "dma_device_id": "system", 00:11:07.736 "dma_device_type": 1 00:11:07.736 }, 00:11:07.736 { 00:11:07.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.736 "dma_device_type": 2 00:11:07.736 } 00:11:07.736 ], 00:11:07.736 "driver_specific": {} 00:11:07.736 } 00:11:07.736 ] 00:11:07.736 20:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.736 20:07:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@907 -- # return 0 00:11:07.736 20:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:07.736 20:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:07.736 20:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:07.736 20:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:07.736 20:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:07.736 20:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:07.736 20:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:07.736 20:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:07.736 20:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.736 20:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.736 20:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.736 20:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.736 20:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.736 20:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:07.736 20:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.736 20:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.736 20:07:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.736 20:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.736 "name": "Existed_Raid", 00:11:07.736 "uuid": "c7cc3875-00b1-4050-acf6-29f017040932", 00:11:07.736 "strip_size_kb": 64, 00:11:07.736 "state": "configuring", 00:11:07.736 "raid_level": "concat", 00:11:07.736 "superblock": true, 00:11:07.736 "num_base_bdevs": 3, 00:11:07.736 "num_base_bdevs_discovered": 2, 00:11:07.736 "num_base_bdevs_operational": 3, 00:11:07.736 "base_bdevs_list": [ 00:11:07.736 { 00:11:07.736 "name": "BaseBdev1", 00:11:07.736 "uuid": "1e6b1a7e-9044-4e52-b690-0893e0506e0a", 00:11:07.736 "is_configured": true, 00:11:07.736 "data_offset": 2048, 00:11:07.736 "data_size": 63488 00:11:07.736 }, 00:11:07.736 { 00:11:07.736 "name": "BaseBdev2", 00:11:07.736 "uuid": "d7d57815-315b-4215-b6c8-4b6227473e9f", 00:11:07.736 "is_configured": true, 00:11:07.736 "data_offset": 2048, 00:11:07.736 "data_size": 63488 00:11:07.736 }, 00:11:07.736 { 00:11:07.736 "name": "BaseBdev3", 00:11:07.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.736 "is_configured": false, 00:11:07.736 "data_offset": 0, 00:11:07.736 "data_size": 0 00:11:07.736 } 00:11:07.736 ] 00:11:07.736 }' 00:11:07.736 20:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.736 20:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.304 20:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:08.304 20:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.304 20:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.304 [2024-10-17 20:07:53.833726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:08.304 [2024-10-17 20:07:53.834053] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:08.304 [2024-10-17 20:07:53.834107] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:08.304 BaseBdev3 00:11:08.304 [2024-10-17 20:07:53.834468] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:08.304 [2024-10-17 20:07:53.834669] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:08.304 [2024-10-17 20:07:53.834694] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:08.304 [2024-10-17 20:07:53.834890] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:08.304 20:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.304 20:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:08.304 20:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:08.304 20:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:08.304 20:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:08.304 20:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:08.304 20:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:08.304 20:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:08.304 20:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.304 20:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.304 20:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:11:08.304 20:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:08.304 20:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.304 20:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.304 [ 00:11:08.304 { 00:11:08.304 "name": "BaseBdev3", 00:11:08.304 "aliases": [ 00:11:08.304 "899758e8-a7c3-46b1-a6a7-e61c9f038cc0" 00:11:08.304 ], 00:11:08.304 "product_name": "Malloc disk", 00:11:08.304 "block_size": 512, 00:11:08.304 "num_blocks": 65536, 00:11:08.304 "uuid": "899758e8-a7c3-46b1-a6a7-e61c9f038cc0", 00:11:08.304 "assigned_rate_limits": { 00:11:08.304 "rw_ios_per_sec": 0, 00:11:08.304 "rw_mbytes_per_sec": 0, 00:11:08.304 "r_mbytes_per_sec": 0, 00:11:08.304 "w_mbytes_per_sec": 0 00:11:08.304 }, 00:11:08.304 "claimed": true, 00:11:08.304 "claim_type": "exclusive_write", 00:11:08.304 "zoned": false, 00:11:08.304 "supported_io_types": { 00:11:08.304 "read": true, 00:11:08.304 "write": true, 00:11:08.304 "unmap": true, 00:11:08.304 "flush": true, 00:11:08.304 "reset": true, 00:11:08.304 "nvme_admin": false, 00:11:08.304 "nvme_io": false, 00:11:08.304 "nvme_io_md": false, 00:11:08.304 "write_zeroes": true, 00:11:08.304 "zcopy": true, 00:11:08.304 "get_zone_info": false, 00:11:08.304 "zone_management": false, 00:11:08.304 "zone_append": false, 00:11:08.304 "compare": false, 00:11:08.304 "compare_and_write": false, 00:11:08.304 "abort": true, 00:11:08.304 "seek_hole": false, 00:11:08.304 "seek_data": false, 00:11:08.304 "copy": true, 00:11:08.304 "nvme_iov_md": false 00:11:08.304 }, 00:11:08.304 "memory_domains": [ 00:11:08.304 { 00:11:08.304 "dma_device_id": "system", 00:11:08.304 "dma_device_type": 1 00:11:08.304 }, 00:11:08.304 { 00:11:08.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.304 "dma_device_type": 2 00:11:08.304 } 00:11:08.304 ], 00:11:08.304 "driver_specific": 
{} 00:11:08.304 } 00:11:08.304 ] 00:11:08.304 20:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.304 20:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:08.304 20:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:08.304 20:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:08.304 20:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:11:08.304 20:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:08.304 20:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:08.304 20:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:08.304 20:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:08.305 20:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:08.305 20:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.305 20:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.305 20:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.305 20:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.305 20:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.305 20:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.305 20:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:11:08.305 20:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.305 20:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.305 20:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.305 "name": "Existed_Raid", 00:11:08.305 "uuid": "c7cc3875-00b1-4050-acf6-29f017040932", 00:11:08.305 "strip_size_kb": 64, 00:11:08.305 "state": "online", 00:11:08.305 "raid_level": "concat", 00:11:08.305 "superblock": true, 00:11:08.305 "num_base_bdevs": 3, 00:11:08.305 "num_base_bdevs_discovered": 3, 00:11:08.305 "num_base_bdevs_operational": 3, 00:11:08.305 "base_bdevs_list": [ 00:11:08.305 { 00:11:08.305 "name": "BaseBdev1", 00:11:08.305 "uuid": "1e6b1a7e-9044-4e52-b690-0893e0506e0a", 00:11:08.305 "is_configured": true, 00:11:08.305 "data_offset": 2048, 00:11:08.305 "data_size": 63488 00:11:08.305 }, 00:11:08.305 { 00:11:08.305 "name": "BaseBdev2", 00:11:08.305 "uuid": "d7d57815-315b-4215-b6c8-4b6227473e9f", 00:11:08.305 "is_configured": true, 00:11:08.305 "data_offset": 2048, 00:11:08.305 "data_size": 63488 00:11:08.305 }, 00:11:08.305 { 00:11:08.305 "name": "BaseBdev3", 00:11:08.305 "uuid": "899758e8-a7c3-46b1-a6a7-e61c9f038cc0", 00:11:08.305 "is_configured": true, 00:11:08.305 "data_offset": 2048, 00:11:08.305 "data_size": 63488 00:11:08.305 } 00:11:08.305 ] 00:11:08.305 }' 00:11:08.305 20:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.305 20:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.889 20:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:08.889 20:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:08.889 20:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_info 00:11:08.889 20:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:08.889 20:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:08.889 20:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:08.889 20:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:08.889 20:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:08.889 20:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.889 20:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.889 [2024-10-17 20:07:54.382442] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:08.889 20:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.889 20:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:08.889 "name": "Existed_Raid", 00:11:08.889 "aliases": [ 00:11:08.890 "c7cc3875-00b1-4050-acf6-29f017040932" 00:11:08.890 ], 00:11:08.890 "product_name": "Raid Volume", 00:11:08.890 "block_size": 512, 00:11:08.890 "num_blocks": 190464, 00:11:08.890 "uuid": "c7cc3875-00b1-4050-acf6-29f017040932", 00:11:08.890 "assigned_rate_limits": { 00:11:08.890 "rw_ios_per_sec": 0, 00:11:08.890 "rw_mbytes_per_sec": 0, 00:11:08.890 "r_mbytes_per_sec": 0, 00:11:08.890 "w_mbytes_per_sec": 0 00:11:08.890 }, 00:11:08.890 "claimed": false, 00:11:08.890 "zoned": false, 00:11:08.890 "supported_io_types": { 00:11:08.890 "read": true, 00:11:08.890 "write": true, 00:11:08.890 "unmap": true, 00:11:08.890 "flush": true, 00:11:08.890 "reset": true, 00:11:08.890 "nvme_admin": false, 00:11:08.890 "nvme_io": false, 00:11:08.890 "nvme_io_md": false, 00:11:08.890 
"write_zeroes": true, 00:11:08.890 "zcopy": false, 00:11:08.890 "get_zone_info": false, 00:11:08.890 "zone_management": false, 00:11:08.890 "zone_append": false, 00:11:08.890 "compare": false, 00:11:08.890 "compare_and_write": false, 00:11:08.890 "abort": false, 00:11:08.890 "seek_hole": false, 00:11:08.890 "seek_data": false, 00:11:08.890 "copy": false, 00:11:08.890 "nvme_iov_md": false 00:11:08.890 }, 00:11:08.890 "memory_domains": [ 00:11:08.890 { 00:11:08.890 "dma_device_id": "system", 00:11:08.890 "dma_device_type": 1 00:11:08.890 }, 00:11:08.890 { 00:11:08.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.890 "dma_device_type": 2 00:11:08.890 }, 00:11:08.890 { 00:11:08.890 "dma_device_id": "system", 00:11:08.890 "dma_device_type": 1 00:11:08.890 }, 00:11:08.890 { 00:11:08.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.890 "dma_device_type": 2 00:11:08.890 }, 00:11:08.890 { 00:11:08.890 "dma_device_id": "system", 00:11:08.890 "dma_device_type": 1 00:11:08.890 }, 00:11:08.890 { 00:11:08.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.890 "dma_device_type": 2 00:11:08.890 } 00:11:08.890 ], 00:11:08.890 "driver_specific": { 00:11:08.890 "raid": { 00:11:08.890 "uuid": "c7cc3875-00b1-4050-acf6-29f017040932", 00:11:08.890 "strip_size_kb": 64, 00:11:08.890 "state": "online", 00:11:08.890 "raid_level": "concat", 00:11:08.890 "superblock": true, 00:11:08.890 "num_base_bdevs": 3, 00:11:08.890 "num_base_bdevs_discovered": 3, 00:11:08.890 "num_base_bdevs_operational": 3, 00:11:08.890 "base_bdevs_list": [ 00:11:08.890 { 00:11:08.890 "name": "BaseBdev1", 00:11:08.890 "uuid": "1e6b1a7e-9044-4e52-b690-0893e0506e0a", 00:11:08.890 "is_configured": true, 00:11:08.890 "data_offset": 2048, 00:11:08.890 "data_size": 63488 00:11:08.890 }, 00:11:08.890 { 00:11:08.890 "name": "BaseBdev2", 00:11:08.890 "uuid": "d7d57815-315b-4215-b6c8-4b6227473e9f", 00:11:08.890 "is_configured": true, 00:11:08.890 "data_offset": 2048, 00:11:08.890 "data_size": 63488 00:11:08.890 }, 
00:11:08.890 { 00:11:08.890 "name": "BaseBdev3", 00:11:08.890 "uuid": "899758e8-a7c3-46b1-a6a7-e61c9f038cc0", 00:11:08.890 "is_configured": true, 00:11:08.890 "data_offset": 2048, 00:11:08.890 "data_size": 63488 00:11:08.890 } 00:11:08.890 ] 00:11:08.890 } 00:11:08.890 } 00:11:08.890 }' 00:11:08.890 20:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:08.890 20:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:08.890 BaseBdev2 00:11:08.890 BaseBdev3' 00:11:08.890 20:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:08.890 20:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:08.890 20:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:08.890 20:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:08.890 20:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.890 20:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:08.890 20:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.149 20:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.149 20:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:09.149 20:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:09.149 20:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:09.149 
20:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:09.149 20:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:09.149 20:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.149 20:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.149 20:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.149 20:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:09.149 20:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:09.149 20:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:09.149 20:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:09.149 20:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.149 20:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:09.149 20:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.149 20:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.149 20:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:09.149 20:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:09.149 20:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:09.149 20:07:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.149 20:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.149 [2024-10-17 20:07:54.706144] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:09.149 [2024-10-17 20:07:54.706201] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:09.149 [2024-10-17 20:07:54.706268] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:09.149 20:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.149 20:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:09.149 20:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:09.149 20:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:09.149 20:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:09.149 20:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:09.149 20:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:11:09.149 20:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:09.149 20:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:09.149 20:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:09.149 20:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:09.149 20:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:09.149 20:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:09.149 20:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.149 20:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.149 20:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.149 20:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.149 20:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.149 20:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.149 20:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.408 20:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.408 20:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.408 "name": "Existed_Raid", 00:11:09.408 "uuid": "c7cc3875-00b1-4050-acf6-29f017040932", 00:11:09.408 "strip_size_kb": 64, 00:11:09.408 "state": "offline", 00:11:09.408 "raid_level": "concat", 00:11:09.408 "superblock": true, 00:11:09.408 "num_base_bdevs": 3, 00:11:09.408 "num_base_bdevs_discovered": 2, 00:11:09.408 "num_base_bdevs_operational": 2, 00:11:09.408 "base_bdevs_list": [ 00:11:09.408 { 00:11:09.408 "name": null, 00:11:09.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.408 "is_configured": false, 00:11:09.408 "data_offset": 0, 00:11:09.408 "data_size": 63488 00:11:09.408 }, 00:11:09.408 { 00:11:09.408 "name": "BaseBdev2", 00:11:09.408 "uuid": "d7d57815-315b-4215-b6c8-4b6227473e9f", 00:11:09.408 "is_configured": true, 00:11:09.408 "data_offset": 2048, 00:11:09.408 "data_size": 63488 00:11:09.408 }, 00:11:09.408 { 00:11:09.408 "name": "BaseBdev3", 00:11:09.408 "uuid": "899758e8-a7c3-46b1-a6a7-e61c9f038cc0", 
00:11:09.408 "is_configured": true, 00:11:09.408 "data_offset": 2048, 00:11:09.408 "data_size": 63488 00:11:09.408 } 00:11:09.408 ] 00:11:09.408 }' 00:11:09.408 20:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.408 20:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.667 20:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:09.667 20:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:09.667 20:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:09.667 20:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.667 20:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.667 20:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.926 20:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.926 20:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:09.926 20:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:09.926 20:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:09.926 20:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.926 20:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.926 [2024-10-17 20:07:55.368475] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:09.926 20:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.926 20:07:55 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:09.926 20:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:09.926 20:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.926 20:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:09.926 20:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.926 20:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.926 20:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.926 20:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:09.926 20:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:09.926 20:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:09.926 20:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.926 20:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.926 [2024-10-17 20:07:55.510240] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:09.926 [2024-10-17 20:07:55.510310] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:10.185 20:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.185 20:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:10.185 20:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:10.185 20:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:10.185 20:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:10.185 20:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.185 20:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.185 20:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.185 20:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:10.185 20:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:10.185 20:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:11:10.185 20:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:10.185 20:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:10.185 20:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:10.185 20:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.185 20:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.185 BaseBdev2 00:11:10.185 20:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.185 20:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:10.185 20:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:10.185 20:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:10.185 20:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:10.185 20:07:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:10.185 20:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:10.185 20:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:10.185 20:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.185 20:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.185 20:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.185 20:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:10.185 20:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.185 20:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.185 [ 00:11:10.185 { 00:11:10.185 "name": "BaseBdev2", 00:11:10.185 "aliases": [ 00:11:10.185 "ff50a1bd-9747-4401-9c6f-79afd80c9dbd" 00:11:10.185 ], 00:11:10.185 "product_name": "Malloc disk", 00:11:10.185 "block_size": 512, 00:11:10.185 "num_blocks": 65536, 00:11:10.185 "uuid": "ff50a1bd-9747-4401-9c6f-79afd80c9dbd", 00:11:10.185 "assigned_rate_limits": { 00:11:10.185 "rw_ios_per_sec": 0, 00:11:10.185 "rw_mbytes_per_sec": 0, 00:11:10.185 "r_mbytes_per_sec": 0, 00:11:10.185 "w_mbytes_per_sec": 0 00:11:10.185 }, 00:11:10.185 "claimed": false, 00:11:10.185 "zoned": false, 00:11:10.185 "supported_io_types": { 00:11:10.185 "read": true, 00:11:10.185 "write": true, 00:11:10.185 "unmap": true, 00:11:10.185 "flush": true, 00:11:10.185 "reset": true, 00:11:10.185 "nvme_admin": false, 00:11:10.185 "nvme_io": false, 00:11:10.185 "nvme_io_md": false, 00:11:10.185 "write_zeroes": true, 00:11:10.185 "zcopy": true, 00:11:10.185 "get_zone_info": false, 00:11:10.185 
"zone_management": false, 00:11:10.185 "zone_append": false, 00:11:10.185 "compare": false, 00:11:10.185 "compare_and_write": false, 00:11:10.185 "abort": true, 00:11:10.185 "seek_hole": false, 00:11:10.185 "seek_data": false, 00:11:10.185 "copy": true, 00:11:10.185 "nvme_iov_md": false 00:11:10.185 }, 00:11:10.185 "memory_domains": [ 00:11:10.185 { 00:11:10.185 "dma_device_id": "system", 00:11:10.185 "dma_device_type": 1 00:11:10.185 }, 00:11:10.185 { 00:11:10.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.186 "dma_device_type": 2 00:11:10.186 } 00:11:10.186 ], 00:11:10.186 "driver_specific": {} 00:11:10.186 } 00:11:10.186 ] 00:11:10.186 20:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.186 20:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:10.186 20:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:10.186 20:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:10.186 20:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:10.186 20:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.186 20:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.186 BaseBdev3 00:11:10.186 20:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.186 20:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:10.186 20:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:10.186 20:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:10.186 20:07:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local i 00:11:10.186 20:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:10.186 20:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:10.186 20:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:10.186 20:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.186 20:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.186 20:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.186 20:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:10.186 20:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.186 20:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.186 [ 00:11:10.186 { 00:11:10.186 "name": "BaseBdev3", 00:11:10.186 "aliases": [ 00:11:10.186 "ec8b2709-7c50-4cc3-8736-68f9c1ed89b6" 00:11:10.186 ], 00:11:10.186 "product_name": "Malloc disk", 00:11:10.186 "block_size": 512, 00:11:10.186 "num_blocks": 65536, 00:11:10.186 "uuid": "ec8b2709-7c50-4cc3-8736-68f9c1ed89b6", 00:11:10.186 "assigned_rate_limits": { 00:11:10.186 "rw_ios_per_sec": 0, 00:11:10.186 "rw_mbytes_per_sec": 0, 00:11:10.186 "r_mbytes_per_sec": 0, 00:11:10.186 "w_mbytes_per_sec": 0 00:11:10.186 }, 00:11:10.186 "claimed": false, 00:11:10.186 "zoned": false, 00:11:10.186 "supported_io_types": { 00:11:10.186 "read": true, 00:11:10.186 "write": true, 00:11:10.186 "unmap": true, 00:11:10.186 "flush": true, 00:11:10.186 "reset": true, 00:11:10.186 "nvme_admin": false, 00:11:10.186 "nvme_io": false, 00:11:10.186 "nvme_io_md": false, 00:11:10.186 "write_zeroes": true, 00:11:10.186 
"zcopy": true, 00:11:10.186 "get_zone_info": false, 00:11:10.186 "zone_management": false, 00:11:10.186 "zone_append": false, 00:11:10.186 "compare": false, 00:11:10.186 "compare_and_write": false, 00:11:10.186 "abort": true, 00:11:10.186 "seek_hole": false, 00:11:10.186 "seek_data": false, 00:11:10.186 "copy": true, 00:11:10.186 "nvme_iov_md": false 00:11:10.186 }, 00:11:10.186 "memory_domains": [ 00:11:10.186 { 00:11:10.186 "dma_device_id": "system", 00:11:10.186 "dma_device_type": 1 00:11:10.186 }, 00:11:10.186 { 00:11:10.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.186 "dma_device_type": 2 00:11:10.186 } 00:11:10.186 ], 00:11:10.186 "driver_specific": {} 00:11:10.186 } 00:11:10.186 ] 00:11:10.186 20:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.186 20:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:10.186 20:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:10.186 20:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:10.186 20:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:10.186 20:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.186 20:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.186 [2024-10-17 20:07:55.799035] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:10.186 [2024-10-17 20:07:55.799083] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:10.186 [2024-10-17 20:07:55.799129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:10.186 [2024-10-17 20:07:55.801488] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:10.186 20:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.186 20:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:10.186 20:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:10.186 20:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:10.186 20:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:10.186 20:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:10.186 20:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:10.186 20:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.186 20:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.186 20:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.186 20:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.186 20:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.186 20:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:10.186 20:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.186 20:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.186 20:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.445 20:07:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.445 "name": "Existed_Raid", 00:11:10.445 "uuid": "33fb705f-7ffa-4064-a3f6-a72216db8d20", 00:11:10.445 "strip_size_kb": 64, 00:11:10.445 "state": "configuring", 00:11:10.445 "raid_level": "concat", 00:11:10.445 "superblock": true, 00:11:10.445 "num_base_bdevs": 3, 00:11:10.445 "num_base_bdevs_discovered": 2, 00:11:10.445 "num_base_bdevs_operational": 3, 00:11:10.445 "base_bdevs_list": [ 00:11:10.445 { 00:11:10.445 "name": "BaseBdev1", 00:11:10.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.445 "is_configured": false, 00:11:10.445 "data_offset": 0, 00:11:10.445 "data_size": 0 00:11:10.445 }, 00:11:10.445 { 00:11:10.445 "name": "BaseBdev2", 00:11:10.445 "uuid": "ff50a1bd-9747-4401-9c6f-79afd80c9dbd", 00:11:10.445 "is_configured": true, 00:11:10.445 "data_offset": 2048, 00:11:10.445 "data_size": 63488 00:11:10.445 }, 00:11:10.445 { 00:11:10.445 "name": "BaseBdev3", 00:11:10.445 "uuid": "ec8b2709-7c50-4cc3-8736-68f9c1ed89b6", 00:11:10.445 "is_configured": true, 00:11:10.445 "data_offset": 2048, 00:11:10.445 "data_size": 63488 00:11:10.445 } 00:11:10.445 ] 00:11:10.445 }' 00:11:10.445 20:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.445 20:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.704 20:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:10.704 20:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.704 20:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.704 [2024-10-17 20:07:56.339187] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:10.704 20:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.704 20:07:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:10.704 20:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:10.704 20:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:10.704 20:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:10.704 20:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:10.704 20:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:10.704 20:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.704 20:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.704 20:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.704 20:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.704 20:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.704 20:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:10.704 20:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.704 20:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.963 20:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.963 20:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.963 "name": "Existed_Raid", 00:11:10.963 "uuid": "33fb705f-7ffa-4064-a3f6-a72216db8d20", 00:11:10.963 "strip_size_kb": 64, 
00:11:10.963 "state": "configuring", 00:11:10.963 "raid_level": "concat", 00:11:10.963 "superblock": true, 00:11:10.963 "num_base_bdevs": 3, 00:11:10.963 "num_base_bdevs_discovered": 1, 00:11:10.963 "num_base_bdevs_operational": 3, 00:11:10.963 "base_bdevs_list": [ 00:11:10.963 { 00:11:10.963 "name": "BaseBdev1", 00:11:10.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.963 "is_configured": false, 00:11:10.963 "data_offset": 0, 00:11:10.963 "data_size": 0 00:11:10.963 }, 00:11:10.963 { 00:11:10.963 "name": null, 00:11:10.963 "uuid": "ff50a1bd-9747-4401-9c6f-79afd80c9dbd", 00:11:10.963 "is_configured": false, 00:11:10.963 "data_offset": 0, 00:11:10.963 "data_size": 63488 00:11:10.963 }, 00:11:10.963 { 00:11:10.963 "name": "BaseBdev3", 00:11:10.963 "uuid": "ec8b2709-7c50-4cc3-8736-68f9c1ed89b6", 00:11:10.963 "is_configured": true, 00:11:10.963 "data_offset": 2048, 00:11:10.963 "data_size": 63488 00:11:10.963 } 00:11:10.963 ] 00:11:10.963 }' 00:11:10.963 20:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.963 20:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.230 20:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.230 20:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:11.230 20:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.230 20:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.230 20:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.488 20:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:11.488 20:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:11:11.488 20:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.488 20:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.488 [2024-10-17 20:07:56.954348] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:11.488 BaseBdev1 00:11:11.488 20:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.488 20:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:11.488 20:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:11.488 20:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:11.488 20:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:11.488 20:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:11.488 20:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:11.488 20:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:11.488 20:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.488 20:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.488 20:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.488 20:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:11.489 20:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.489 20:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.489 
[ 00:11:11.489 { 00:11:11.489 "name": "BaseBdev1", 00:11:11.489 "aliases": [ 00:11:11.489 "e4a66e4c-d01c-411b-96d6-13d5a50fa31a" 00:11:11.489 ], 00:11:11.489 "product_name": "Malloc disk", 00:11:11.489 "block_size": 512, 00:11:11.489 "num_blocks": 65536, 00:11:11.489 "uuid": "e4a66e4c-d01c-411b-96d6-13d5a50fa31a", 00:11:11.489 "assigned_rate_limits": { 00:11:11.489 "rw_ios_per_sec": 0, 00:11:11.489 "rw_mbytes_per_sec": 0, 00:11:11.489 "r_mbytes_per_sec": 0, 00:11:11.489 "w_mbytes_per_sec": 0 00:11:11.489 }, 00:11:11.489 "claimed": true, 00:11:11.489 "claim_type": "exclusive_write", 00:11:11.489 "zoned": false, 00:11:11.489 "supported_io_types": { 00:11:11.489 "read": true, 00:11:11.489 "write": true, 00:11:11.489 "unmap": true, 00:11:11.489 "flush": true, 00:11:11.489 "reset": true, 00:11:11.489 "nvme_admin": false, 00:11:11.489 "nvme_io": false, 00:11:11.489 "nvme_io_md": false, 00:11:11.489 "write_zeroes": true, 00:11:11.489 "zcopy": true, 00:11:11.489 "get_zone_info": false, 00:11:11.489 "zone_management": false, 00:11:11.489 "zone_append": false, 00:11:11.489 "compare": false, 00:11:11.489 "compare_and_write": false, 00:11:11.489 "abort": true, 00:11:11.489 "seek_hole": false, 00:11:11.489 "seek_data": false, 00:11:11.489 "copy": true, 00:11:11.489 "nvme_iov_md": false 00:11:11.489 }, 00:11:11.489 "memory_domains": [ 00:11:11.489 { 00:11:11.489 "dma_device_id": "system", 00:11:11.489 "dma_device_type": 1 00:11:11.489 }, 00:11:11.489 { 00:11:11.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.489 "dma_device_type": 2 00:11:11.489 } 00:11:11.489 ], 00:11:11.489 "driver_specific": {} 00:11:11.489 } 00:11:11.489 ] 00:11:11.489 20:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.489 20:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:11.489 20:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring concat 64 3 00:11:11.489 20:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:11.489 20:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:11.489 20:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:11.489 20:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:11.489 20:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:11.489 20:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.489 20:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.489 20:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.489 20:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.489 20:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.489 20:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:11.489 20:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.489 20:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.489 20:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.489 20:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.489 "name": "Existed_Raid", 00:11:11.489 "uuid": "33fb705f-7ffa-4064-a3f6-a72216db8d20", 00:11:11.489 "strip_size_kb": 64, 00:11:11.489 "state": "configuring", 00:11:11.489 "raid_level": "concat", 00:11:11.489 "superblock": true, 
00:11:11.489 "num_base_bdevs": 3, 00:11:11.489 "num_base_bdevs_discovered": 2, 00:11:11.489 "num_base_bdevs_operational": 3, 00:11:11.489 "base_bdevs_list": [ 00:11:11.489 { 00:11:11.489 "name": "BaseBdev1", 00:11:11.489 "uuid": "e4a66e4c-d01c-411b-96d6-13d5a50fa31a", 00:11:11.489 "is_configured": true, 00:11:11.489 "data_offset": 2048, 00:11:11.489 "data_size": 63488 00:11:11.489 }, 00:11:11.489 { 00:11:11.489 "name": null, 00:11:11.489 "uuid": "ff50a1bd-9747-4401-9c6f-79afd80c9dbd", 00:11:11.489 "is_configured": false, 00:11:11.489 "data_offset": 0, 00:11:11.489 "data_size": 63488 00:11:11.489 }, 00:11:11.489 { 00:11:11.489 "name": "BaseBdev3", 00:11:11.489 "uuid": "ec8b2709-7c50-4cc3-8736-68f9c1ed89b6", 00:11:11.489 "is_configured": true, 00:11:11.489 "data_offset": 2048, 00:11:11.489 "data_size": 63488 00:11:11.489 } 00:11:11.489 ] 00:11:11.489 }' 00:11:11.489 20:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.489 20:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.066 20:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.066 20:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.066 20:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.066 20:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:12.066 20:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.066 20:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:12.066 20:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:12.066 20:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:11:12.066 20:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.066 [2024-10-17 20:07:57.542635] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:12.066 20:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.066 20:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:12.066 20:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:12.066 20:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:12.066 20:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:12.066 20:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:12.066 20:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:12.066 20:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.066 20:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.066 20:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.066 20:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.066 20:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.066 20:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:12.066 20:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.066 20:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:11:12.066 20:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.066 20:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.066 "name": "Existed_Raid", 00:11:12.066 "uuid": "33fb705f-7ffa-4064-a3f6-a72216db8d20", 00:11:12.066 "strip_size_kb": 64, 00:11:12.066 "state": "configuring", 00:11:12.066 "raid_level": "concat", 00:11:12.066 "superblock": true, 00:11:12.066 "num_base_bdevs": 3, 00:11:12.066 "num_base_bdevs_discovered": 1, 00:11:12.066 "num_base_bdevs_operational": 3, 00:11:12.066 "base_bdevs_list": [ 00:11:12.066 { 00:11:12.066 "name": "BaseBdev1", 00:11:12.066 "uuid": "e4a66e4c-d01c-411b-96d6-13d5a50fa31a", 00:11:12.066 "is_configured": true, 00:11:12.066 "data_offset": 2048, 00:11:12.066 "data_size": 63488 00:11:12.066 }, 00:11:12.066 { 00:11:12.066 "name": null, 00:11:12.066 "uuid": "ff50a1bd-9747-4401-9c6f-79afd80c9dbd", 00:11:12.066 "is_configured": false, 00:11:12.066 "data_offset": 0, 00:11:12.066 "data_size": 63488 00:11:12.066 }, 00:11:12.066 { 00:11:12.066 "name": null, 00:11:12.066 "uuid": "ec8b2709-7c50-4cc3-8736-68f9c1ed89b6", 00:11:12.066 "is_configured": false, 00:11:12.066 "data_offset": 0, 00:11:12.066 "data_size": 63488 00:11:12.066 } 00:11:12.066 ] 00:11:12.066 }' 00:11:12.066 20:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.066 20:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.632 20:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.632 20:07:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.632 20:07:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.632 20:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:11:12.633 20:07:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.633 20:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:12.633 20:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:12.633 20:07:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.633 20:07:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.633 [2024-10-17 20:07:58.110817] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:12.633 20:07:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.633 20:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:12.633 20:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:12.633 20:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:12.633 20:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:12.633 20:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:12.633 20:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:12.633 20:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.633 20:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.633 20:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.633 20:07:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.633 20:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.633 20:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:12.633 20:07:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.633 20:07:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.633 20:07:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.633 20:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.633 "name": "Existed_Raid", 00:11:12.633 "uuid": "33fb705f-7ffa-4064-a3f6-a72216db8d20", 00:11:12.633 "strip_size_kb": 64, 00:11:12.633 "state": "configuring", 00:11:12.633 "raid_level": "concat", 00:11:12.633 "superblock": true, 00:11:12.633 "num_base_bdevs": 3, 00:11:12.633 "num_base_bdevs_discovered": 2, 00:11:12.633 "num_base_bdevs_operational": 3, 00:11:12.633 "base_bdevs_list": [ 00:11:12.633 { 00:11:12.633 "name": "BaseBdev1", 00:11:12.633 "uuid": "e4a66e4c-d01c-411b-96d6-13d5a50fa31a", 00:11:12.633 "is_configured": true, 00:11:12.633 "data_offset": 2048, 00:11:12.633 "data_size": 63488 00:11:12.633 }, 00:11:12.633 { 00:11:12.633 "name": null, 00:11:12.633 "uuid": "ff50a1bd-9747-4401-9c6f-79afd80c9dbd", 00:11:12.633 "is_configured": false, 00:11:12.633 "data_offset": 0, 00:11:12.633 "data_size": 63488 00:11:12.633 }, 00:11:12.633 { 00:11:12.633 "name": "BaseBdev3", 00:11:12.633 "uuid": "ec8b2709-7c50-4cc3-8736-68f9c1ed89b6", 00:11:12.633 "is_configured": true, 00:11:12.633 "data_offset": 2048, 00:11:12.633 "data_size": 63488 00:11:12.633 } 00:11:12.633 ] 00:11:12.633 }' 00:11:12.633 20:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.633 
20:07:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.199 20:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.199 20:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:13.199 20:07:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.199 20:07:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.199 20:07:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.199 20:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:13.199 20:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:13.199 20:07:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.199 20:07:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.199 [2024-10-17 20:07:58.666993] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:13.199 20:07:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.199 20:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:13.199 20:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:13.199 20:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:13.199 20:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:13.199 20:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:13.199 20:07:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:13.199 20:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.199 20:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.199 20:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.199 20:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.199 20:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.199 20:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:13.199 20:07:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.199 20:07:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.199 20:07:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.199 20:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.199 "name": "Existed_Raid", 00:11:13.199 "uuid": "33fb705f-7ffa-4064-a3f6-a72216db8d20", 00:11:13.199 "strip_size_kb": 64, 00:11:13.199 "state": "configuring", 00:11:13.199 "raid_level": "concat", 00:11:13.199 "superblock": true, 00:11:13.199 "num_base_bdevs": 3, 00:11:13.200 "num_base_bdevs_discovered": 1, 00:11:13.200 "num_base_bdevs_operational": 3, 00:11:13.200 "base_bdevs_list": [ 00:11:13.200 { 00:11:13.200 "name": null, 00:11:13.200 "uuid": "e4a66e4c-d01c-411b-96d6-13d5a50fa31a", 00:11:13.200 "is_configured": false, 00:11:13.200 "data_offset": 0, 00:11:13.200 "data_size": 63488 00:11:13.200 }, 00:11:13.200 { 00:11:13.200 "name": null, 00:11:13.200 "uuid": "ff50a1bd-9747-4401-9c6f-79afd80c9dbd", 00:11:13.200 "is_configured": false, 
00:11:13.200 "data_offset": 0, 00:11:13.200 "data_size": 63488 00:11:13.200 }, 00:11:13.200 { 00:11:13.200 "name": "BaseBdev3", 00:11:13.200 "uuid": "ec8b2709-7c50-4cc3-8736-68f9c1ed89b6", 00:11:13.200 "is_configured": true, 00:11:13.200 "data_offset": 2048, 00:11:13.200 "data_size": 63488 00:11:13.200 } 00:11:13.200 ] 00:11:13.200 }' 00:11:13.200 20:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.200 20:07:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.766 20:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.766 20:07:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.766 20:07:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.766 20:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:13.766 20:07:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.766 20:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:13.766 20:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:13.766 20:07:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.766 20:07:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.766 [2024-10-17 20:07:59.344338] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:13.766 20:07:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.766 20:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:11:13.766 20:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:13.766 20:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:13.766 20:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:13.766 20:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:13.766 20:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:13.766 20:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.766 20:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.766 20:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.766 20:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.766 20:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.766 20:07:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.766 20:07:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.766 20:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:13.766 20:07:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.766 20:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.766 "name": "Existed_Raid", 00:11:13.766 "uuid": "33fb705f-7ffa-4064-a3f6-a72216db8d20", 00:11:13.766 "strip_size_kb": 64, 00:11:13.766 "state": "configuring", 00:11:13.766 "raid_level": "concat", 00:11:13.766 "superblock": true, 00:11:13.766 
"num_base_bdevs": 3, 00:11:13.766 "num_base_bdevs_discovered": 2, 00:11:13.766 "num_base_bdevs_operational": 3, 00:11:13.766 "base_bdevs_list": [ 00:11:13.766 { 00:11:13.766 "name": null, 00:11:13.766 "uuid": "e4a66e4c-d01c-411b-96d6-13d5a50fa31a", 00:11:13.766 "is_configured": false, 00:11:13.766 "data_offset": 0, 00:11:13.766 "data_size": 63488 00:11:13.766 }, 00:11:13.766 { 00:11:13.766 "name": "BaseBdev2", 00:11:13.766 "uuid": "ff50a1bd-9747-4401-9c6f-79afd80c9dbd", 00:11:13.766 "is_configured": true, 00:11:13.766 "data_offset": 2048, 00:11:13.766 "data_size": 63488 00:11:13.766 }, 00:11:13.766 { 00:11:13.766 "name": "BaseBdev3", 00:11:13.766 "uuid": "ec8b2709-7c50-4cc3-8736-68f9c1ed89b6", 00:11:13.766 "is_configured": true, 00:11:13.766 "data_offset": 2048, 00:11:13.766 "data_size": 63488 00:11:13.766 } 00:11:13.766 ] 00:11:13.766 }' 00:11:13.766 20:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.766 20:07:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.333 20:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.333 20:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:14.333 20:07:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.333 20:07:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.333 20:07:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.591 20:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:14.591 20:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.591 20:07:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:14.591 20:07:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.591 20:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:14.591 20:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.591 20:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e4a66e4c-d01c-411b-96d6-13d5a50fa31a 00:11:14.591 20:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.591 20:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.591 [2024-10-17 20:08:00.075624] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:14.591 [2024-10-17 20:08:00.075882] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:14.591 [2024-10-17 20:08:00.075905] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:14.591 NewBaseBdev 00:11:14.591 [2024-10-17 20:08:00.076267] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:14.591 [2024-10-17 20:08:00.076456] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:14.591 [2024-10-17 20:08:00.076473] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:14.591 [2024-10-17 20:08:00.076649] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:14.591 20:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.591 20:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:14.591 20:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local 
bdev_name=NewBaseBdev 00:11:14.591 20:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:14.591 20:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:14.591 20:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:14.591 20:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:14.591 20:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:14.591 20:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.591 20:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.591 20:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.591 20:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:14.591 20:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.591 20:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.591 [ 00:11:14.591 { 00:11:14.591 "name": "NewBaseBdev", 00:11:14.591 "aliases": [ 00:11:14.591 "e4a66e4c-d01c-411b-96d6-13d5a50fa31a" 00:11:14.591 ], 00:11:14.591 "product_name": "Malloc disk", 00:11:14.591 "block_size": 512, 00:11:14.591 "num_blocks": 65536, 00:11:14.591 "uuid": "e4a66e4c-d01c-411b-96d6-13d5a50fa31a", 00:11:14.591 "assigned_rate_limits": { 00:11:14.591 "rw_ios_per_sec": 0, 00:11:14.591 "rw_mbytes_per_sec": 0, 00:11:14.591 "r_mbytes_per_sec": 0, 00:11:14.591 "w_mbytes_per_sec": 0 00:11:14.591 }, 00:11:14.591 "claimed": true, 00:11:14.591 "claim_type": "exclusive_write", 00:11:14.591 "zoned": false, 00:11:14.591 "supported_io_types": { 00:11:14.591 "read": true, 00:11:14.591 
"write": true, 00:11:14.591 "unmap": true, 00:11:14.591 "flush": true, 00:11:14.591 "reset": true, 00:11:14.591 "nvme_admin": false, 00:11:14.591 "nvme_io": false, 00:11:14.591 "nvme_io_md": false, 00:11:14.592 "write_zeroes": true, 00:11:14.592 "zcopy": true, 00:11:14.592 "get_zone_info": false, 00:11:14.592 "zone_management": false, 00:11:14.592 "zone_append": false, 00:11:14.592 "compare": false, 00:11:14.592 "compare_and_write": false, 00:11:14.592 "abort": true, 00:11:14.592 "seek_hole": false, 00:11:14.592 "seek_data": false, 00:11:14.592 "copy": true, 00:11:14.592 "nvme_iov_md": false 00:11:14.592 }, 00:11:14.592 "memory_domains": [ 00:11:14.592 { 00:11:14.592 "dma_device_id": "system", 00:11:14.592 "dma_device_type": 1 00:11:14.592 }, 00:11:14.592 { 00:11:14.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.592 "dma_device_type": 2 00:11:14.592 } 00:11:14.592 ], 00:11:14.592 "driver_specific": {} 00:11:14.592 } 00:11:14.592 ] 00:11:14.592 20:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.592 20:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:14.592 20:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:11:14.592 20:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:14.592 20:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:14.592 20:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:14.592 20:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:14.592 20:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:14.592 20:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:11:14.592 20:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.592 20:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.592 20:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.592 20:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.592 20:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.592 20:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.592 20:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.592 20:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.592 20:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.592 "name": "Existed_Raid", 00:11:14.592 "uuid": "33fb705f-7ffa-4064-a3f6-a72216db8d20", 00:11:14.592 "strip_size_kb": 64, 00:11:14.592 "state": "online", 00:11:14.592 "raid_level": "concat", 00:11:14.592 "superblock": true, 00:11:14.592 "num_base_bdevs": 3, 00:11:14.592 "num_base_bdevs_discovered": 3, 00:11:14.592 "num_base_bdevs_operational": 3, 00:11:14.592 "base_bdevs_list": [ 00:11:14.592 { 00:11:14.592 "name": "NewBaseBdev", 00:11:14.592 "uuid": "e4a66e4c-d01c-411b-96d6-13d5a50fa31a", 00:11:14.592 "is_configured": true, 00:11:14.592 "data_offset": 2048, 00:11:14.592 "data_size": 63488 00:11:14.592 }, 00:11:14.592 { 00:11:14.592 "name": "BaseBdev2", 00:11:14.592 "uuid": "ff50a1bd-9747-4401-9c6f-79afd80c9dbd", 00:11:14.592 "is_configured": true, 00:11:14.592 "data_offset": 2048, 00:11:14.592 "data_size": 63488 00:11:14.592 }, 00:11:14.592 { 00:11:14.592 "name": "BaseBdev3", 00:11:14.592 "uuid": 
"ec8b2709-7c50-4cc3-8736-68f9c1ed89b6", 00:11:14.592 "is_configured": true, 00:11:14.592 "data_offset": 2048, 00:11:14.592 "data_size": 63488 00:11:14.592 } 00:11:14.592 ] 00:11:14.592 }' 00:11:14.592 20:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.592 20:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.158 20:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:15.158 20:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:15.158 20:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:15.158 20:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:15.158 20:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:15.158 20:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:15.158 20:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:15.158 20:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.158 20:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:15.158 20:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.159 [2024-10-17 20:08:00.628217] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:15.159 20:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.159 20:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:15.159 "name": "Existed_Raid", 00:11:15.159 "aliases": [ 00:11:15.159 "33fb705f-7ffa-4064-a3f6-a72216db8d20" 
00:11:15.159 ], 00:11:15.159 "product_name": "Raid Volume", 00:11:15.159 "block_size": 512, 00:11:15.159 "num_blocks": 190464, 00:11:15.159 "uuid": "33fb705f-7ffa-4064-a3f6-a72216db8d20", 00:11:15.159 "assigned_rate_limits": { 00:11:15.159 "rw_ios_per_sec": 0, 00:11:15.159 "rw_mbytes_per_sec": 0, 00:11:15.159 "r_mbytes_per_sec": 0, 00:11:15.159 "w_mbytes_per_sec": 0 00:11:15.159 }, 00:11:15.159 "claimed": false, 00:11:15.159 "zoned": false, 00:11:15.159 "supported_io_types": { 00:11:15.159 "read": true, 00:11:15.159 "write": true, 00:11:15.159 "unmap": true, 00:11:15.159 "flush": true, 00:11:15.159 "reset": true, 00:11:15.159 "nvme_admin": false, 00:11:15.159 "nvme_io": false, 00:11:15.159 "nvme_io_md": false, 00:11:15.159 "write_zeroes": true, 00:11:15.159 "zcopy": false, 00:11:15.159 "get_zone_info": false, 00:11:15.159 "zone_management": false, 00:11:15.159 "zone_append": false, 00:11:15.159 "compare": false, 00:11:15.159 "compare_and_write": false, 00:11:15.159 "abort": false, 00:11:15.159 "seek_hole": false, 00:11:15.159 "seek_data": false, 00:11:15.159 "copy": false, 00:11:15.159 "nvme_iov_md": false 00:11:15.159 }, 00:11:15.159 "memory_domains": [ 00:11:15.159 { 00:11:15.159 "dma_device_id": "system", 00:11:15.159 "dma_device_type": 1 00:11:15.159 }, 00:11:15.159 { 00:11:15.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.159 "dma_device_type": 2 00:11:15.159 }, 00:11:15.159 { 00:11:15.159 "dma_device_id": "system", 00:11:15.159 "dma_device_type": 1 00:11:15.159 }, 00:11:15.159 { 00:11:15.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.159 "dma_device_type": 2 00:11:15.159 }, 00:11:15.159 { 00:11:15.159 "dma_device_id": "system", 00:11:15.159 "dma_device_type": 1 00:11:15.159 }, 00:11:15.159 { 00:11:15.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.159 "dma_device_type": 2 00:11:15.159 } 00:11:15.159 ], 00:11:15.159 "driver_specific": { 00:11:15.159 "raid": { 00:11:15.159 "uuid": "33fb705f-7ffa-4064-a3f6-a72216db8d20", 00:11:15.159 
"strip_size_kb": 64, 00:11:15.159 "state": "online", 00:11:15.159 "raid_level": "concat", 00:11:15.159 "superblock": true, 00:11:15.159 "num_base_bdevs": 3, 00:11:15.159 "num_base_bdevs_discovered": 3, 00:11:15.159 "num_base_bdevs_operational": 3, 00:11:15.159 "base_bdevs_list": [ 00:11:15.159 { 00:11:15.159 "name": "NewBaseBdev", 00:11:15.159 "uuid": "e4a66e4c-d01c-411b-96d6-13d5a50fa31a", 00:11:15.159 "is_configured": true, 00:11:15.159 "data_offset": 2048, 00:11:15.159 "data_size": 63488 00:11:15.159 }, 00:11:15.159 { 00:11:15.159 "name": "BaseBdev2", 00:11:15.159 "uuid": "ff50a1bd-9747-4401-9c6f-79afd80c9dbd", 00:11:15.159 "is_configured": true, 00:11:15.159 "data_offset": 2048, 00:11:15.159 "data_size": 63488 00:11:15.159 }, 00:11:15.159 { 00:11:15.159 "name": "BaseBdev3", 00:11:15.159 "uuid": "ec8b2709-7c50-4cc3-8736-68f9c1ed89b6", 00:11:15.159 "is_configured": true, 00:11:15.159 "data_offset": 2048, 00:11:15.159 "data_size": 63488 00:11:15.159 } 00:11:15.159 ] 00:11:15.159 } 00:11:15.159 } 00:11:15.159 }' 00:11:15.159 20:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:15.159 20:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:15.159 BaseBdev2 00:11:15.159 BaseBdev3' 00:11:15.159 20:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:15.159 20:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:15.159 20:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:15.159 20:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:15.159 20:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:15.159 20:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.159 20:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.159 20:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.418 20:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:15.418 20:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:15.418 20:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:15.418 20:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:15.418 20:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:15.418 20:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.418 20:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.418 20:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.418 20:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:15.418 20:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:15.418 20:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:15.418 20:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:15.418 20:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 
00:11:15.418 20:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.418 20:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.418 20:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.418 20:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:15.418 20:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:15.418 20:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:15.418 20:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.418 20:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.418 [2024-10-17 20:08:00.943899] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:15.418 [2024-10-17 20:08:00.943933] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:15.418 [2024-10-17 20:08:00.944023] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:15.418 [2024-10-17 20:08:00.944258] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:15.418 [2024-10-17 20:08:00.944336] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:15.418 20:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.418 20:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66161 00:11:15.418 20:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 66161 ']' 00:11:15.418 20:08:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@954 -- # kill -0 66161 00:11:15.418 20:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:11:15.418 20:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:15.418 20:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66161 00:11:15.418 killing process with pid 66161 00:11:15.418 20:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:15.418 20:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:15.418 20:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66161' 00:11:15.418 20:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 66161 00:11:15.418 [2024-10-17 20:08:00.982325] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:15.418 20:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 66161 00:11:15.677 [2024-10-17 20:08:01.233210] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:16.612 ************************************ 00:11:16.612 END TEST raid_state_function_test_sb 00:11:16.612 ************************************ 00:11:16.612 20:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:16.612 00:11:16.612 real 0m11.902s 00:11:16.612 user 0m19.832s 00:11:16.612 sys 0m1.698s 00:11:16.612 20:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:16.612 20:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.870 20:08:02 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:11:16.870 20:08:02 bdev_raid -- 
common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:16.870 20:08:02 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:16.870 20:08:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:16.871 ************************************ 00:11:16.871 START TEST raid_superblock_test 00:11:16.871 ************************************ 00:11:16.871 20:08:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 3 00:11:16.871 20:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:11:16.871 20:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:11:16.871 20:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:16.871 20:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:16.871 20:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:16.871 20:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:16.871 20:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:16.871 20:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:16.871 20:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:16.871 20:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:16.871 20:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:16.871 20:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:16.871 20:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:16.871 20:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:11:16.871 20:08:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:16.871 20:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:16.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:16.871 20:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66798 00:11:16.871 20:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66798 00:11:16.871 20:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:16.871 20:08:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 66798 ']' 00:11:16.871 20:08:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:16.871 20:08:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:16.871 20:08:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:16.871 20:08:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:16.871 20:08:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.871 [2024-10-17 20:08:02.387823] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 
00:11:16.871 [2024-10-17 20:08:02.388298] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66798 ]
00:11:17.196 [2024-10-17 20:08:02.561649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:17.197 [2024-10-17 20:08:02.674560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:17.454 [2024-10-17 20:08:02.870163] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:17.454 [2024-10-17 20:08:02.870390] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:17.713 20:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:11:17.713 20:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0
00:11:17.713 20:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:11:17.713 20:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:11:17.713 20:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:11:17.713 20:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:11:17.713 20:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:11:17.713 20:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:11:17.713 20:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:11:17.713 20:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:11:17.713 20:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:11:17.713 20:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:17.713 20:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:17.972 malloc1
00:11:17.972 20:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:17.972 20:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:11:17.972 20:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:17.972 20:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:17.972 [2024-10-17 20:08:03.380315] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:11:17.972 [2024-10-17 20:08:03.380412] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:17.972 [2024-10-17 20:08:03.380476] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:11:17.972 [2024-10-17 20:08:03.380491] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:17.972 [2024-10-17 20:08:03.383293] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:17.972 [2024-10-17 20:08:03.383334] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:11:17.972 pt1
00:11:17.972 20:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:17.972 20:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:11:17.972 20:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:11:17.972 20:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:11:17.972 20:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:11:17.972 20:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:11:17.972 20:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:11:17.973 20:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:11:17.973 20:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:11:17.973 20:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:11:17.973 20:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:17.973 20:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:17.973 malloc2
00:11:17.973 20:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:17.973 20:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:11:17.973 20:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:17.973 20:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:17.973 [2024-10-17 20:08:03.429363] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:11:17.973 [2024-10-17 20:08:03.429442] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:17.973 [2024-10-17 20:08:03.429488] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:11:17.973 [2024-10-17 20:08:03.429502] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:17.973 [2024-10-17 20:08:03.432495] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:17.973 [2024-10-17 20:08:03.432537] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:11:17.973 pt2
00:11:17.973 20:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:17.973 20:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:11:17.973 20:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:11:17.973 20:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3
00:11:17.973 20:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3
00:11:17.973 20:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:11:17.973 20:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:11:17.973 20:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:11:17.973 20:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:11:17.973 20:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3
00:11:17.973 20:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:17.973 20:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:17.973 malloc3
00:11:17.973 20:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:17.973 20:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:11:17.973 20:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:17.973 20:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:17.973 [2024-10-17 20:08:03.497084] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:11:17.973 [2024-10-17 20:08:03.497163] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:17.973 [2024-10-17 20:08:03.497195] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:11:17.973 [2024-10-17 20:08:03.497210] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:17.973 [2024-10-17 20:08:03.499957] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:17.973 [2024-10-17 20:08:03.500045] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:11:17.973 pt3
00:11:17.973 20:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:17.973 20:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:11:17.973 20:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:11:17.973 20:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s
00:11:17.973 20:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:17.973 20:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:17.973 [2024-10-17 20:08:03.509141] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:11:17.973 [2024-10-17 20:08:03.511624] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:11:17.973 [2024-10-17 20:08:03.511712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:11:17.973 [2024-10-17 20:08:03.511904] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:11:17.973 [2024-10-17 20:08:03.511925] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:11:17.973 [2024-10-17 20:08:03.512250] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:11:17.973 [2024-10-17 20:08:03.512480] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:11:17.973 [2024-10-17 20:08:03.512495] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:11:17.973 [2024-10-17 20:08:03.512652] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:17.973 20:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:17.973 20:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3
00:11:17.973 20:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:17.973 20:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:17.973 20:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:17.973 20:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:17.973 20:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:11:17.973 20:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:17.973 20:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:17.973 20:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:17.973 20:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:17.973 20:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:17.973 20:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:17.973 20:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:17.973 20:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:17.973 20:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:17.973 20:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:17.973 "name": "raid_bdev1",
00:11:17.973 "uuid": "d9f38c93-1083-4215-96d4-93f6f5b6b49d",
00:11:17.973 "strip_size_kb": 64,
00:11:17.973 "state": "online",
00:11:17.973 "raid_level": "concat",
00:11:17.973 "superblock": true,
00:11:17.973 "num_base_bdevs": 3,
00:11:17.973 "num_base_bdevs_discovered": 3,
00:11:17.973 "num_base_bdevs_operational": 3,
00:11:17.973 "base_bdevs_list": [
00:11:17.973 {
00:11:17.973 "name": "pt1",
00:11:17.973 "uuid": "00000000-0000-0000-0000-000000000001",
00:11:17.973 "is_configured": true,
00:11:17.973 "data_offset": 2048,
00:11:17.973 "data_size": 63488
00:11:17.973 },
00:11:17.973 {
00:11:17.973 "name": "pt2",
00:11:17.973 "uuid": "00000000-0000-0000-0000-000000000002",
00:11:17.973 "is_configured": true,
00:11:17.973 "data_offset": 2048,
00:11:17.973 "data_size": 63488
00:11:17.973 },
00:11:17.973 {
00:11:17.973 "name": "pt3",
00:11:17.973 "uuid": "00000000-0000-0000-0000-000000000003",
00:11:17.973 "is_configured": true,
00:11:17.973 "data_offset": 2048,
00:11:17.973 "data_size": 63488
00:11:17.973 }
00:11:17.973 ]
00:11:17.973 }'
00:11:17.973 20:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:17.973 20:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:18.541 20:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:11:18.541 20:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:11:18.541 20:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:11:18.541 20:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:11:18.541 20:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:11:18.541 20:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:11:18.541 20:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:11:18.541 20:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:11:18.541 20:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:18.541 20:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:18.541 [2024-10-17 20:08:04.069863] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:11:18.541 20:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:18.541 20:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:11:18.541 "name": "raid_bdev1",
00:11:18.541 "aliases": [
00:11:18.541 "d9f38c93-1083-4215-96d4-93f6f5b6b49d"
00:11:18.541 ],
00:11:18.541 "product_name": "Raid Volume",
00:11:18.541 "block_size": 512,
00:11:18.541 "num_blocks": 190464,
00:11:18.541 "uuid": "d9f38c93-1083-4215-96d4-93f6f5b6b49d",
00:11:18.541 "assigned_rate_limits": {
00:11:18.541 "rw_ios_per_sec": 0,
00:11:18.541 "rw_mbytes_per_sec": 0,
00:11:18.541 "r_mbytes_per_sec": 0,
00:11:18.541 "w_mbytes_per_sec": 0
00:11:18.541 },
00:11:18.541 "claimed": false,
00:11:18.541 "zoned": false,
00:11:18.541 "supported_io_types": {
00:11:18.541 "read": true,
00:11:18.541 "write": true,
00:11:18.541 "unmap": true,
00:11:18.541 "flush": true,
00:11:18.541 "reset": true,
00:11:18.541 "nvme_admin": false,
00:11:18.541 "nvme_io": false,
00:11:18.541 "nvme_io_md": false,
00:11:18.541 "write_zeroes": true,
00:11:18.541 "zcopy": false,
00:11:18.541 "get_zone_info": false,
00:11:18.541 "zone_management": false,
00:11:18.541 "zone_append": false,
00:11:18.541 "compare": false,
00:11:18.541 "compare_and_write": false,
00:11:18.541 "abort": false,
00:11:18.541 "seek_hole": false,
00:11:18.541 "seek_data": false,
00:11:18.541 "copy": false,
00:11:18.541 "nvme_iov_md": false
00:11:18.541 },
00:11:18.541 "memory_domains": [
00:11:18.541 {
00:11:18.541 "dma_device_id": "system",
00:11:18.541 "dma_device_type": 1
00:11:18.541 },
00:11:18.541 {
00:11:18.541 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:18.541 "dma_device_type": 2
00:11:18.541 },
00:11:18.541 {
00:11:18.541 "dma_device_id": "system",
00:11:18.541 "dma_device_type": 1
00:11:18.541 },
00:11:18.541 {
00:11:18.541 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:18.541 "dma_device_type": 2
00:11:18.541 },
00:11:18.541 {
00:11:18.541 "dma_device_id": "system",
00:11:18.541 "dma_device_type": 1
00:11:18.541 },
00:11:18.541 {
00:11:18.541 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:18.541 "dma_device_type": 2
00:11:18.541 }
00:11:18.541 ],
00:11:18.541 "driver_specific": {
00:11:18.541 "raid": {
00:11:18.541 "uuid": "d9f38c93-1083-4215-96d4-93f6f5b6b49d",
00:11:18.541 "strip_size_kb": 64,
00:11:18.541 "state": "online",
00:11:18.541 "raid_level": "concat",
00:11:18.541 "superblock": true,
00:11:18.541 "num_base_bdevs": 3,
00:11:18.541 "num_base_bdevs_discovered": 3,
00:11:18.541 "num_base_bdevs_operational": 3,
00:11:18.541 "base_bdevs_list": [
00:11:18.541 {
00:11:18.541 "name": "pt1",
00:11:18.541 "uuid": "00000000-0000-0000-0000-000000000001",
00:11:18.541 "is_configured": true,
00:11:18.541 "data_offset": 2048,
00:11:18.541 "data_size": 63488
00:11:18.541 },
00:11:18.541 {
00:11:18.541 "name": "pt2",
00:11:18.541 "uuid": "00000000-0000-0000-0000-000000000002",
00:11:18.541 "is_configured": true,
00:11:18.541 "data_offset": 2048,
00:11:18.541 "data_size": 63488
00:11:18.541 },
00:11:18.541 {
00:11:18.541 "name": "pt3",
00:11:18.541 "uuid": "00000000-0000-0000-0000-000000000003",
00:11:18.541 "is_configured": true,
00:11:18.541 "data_offset": 2048,
00:11:18.541 "data_size": 63488
00:11:18.541 }
00:11:18.541 ]
00:11:18.541 }
00:11:18.541 }
00:11:18.541 }'
00:11:18.541 20:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:11:18.541 20:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:11:18.541 pt2
00:11:18.541 pt3'
00:11:18.541 20:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:18.800 20:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:11:18.800 20:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:18.800 20:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:11:18.800 20:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:18.800 20:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:18.800 20:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:18.800 20:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:18.800 20:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:18.800 20:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:18.800 20:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:18.800 20:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:11:18.800 20:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:18.800 20:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:18.800 20:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:18.800 20:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:18.800 20:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:18.800 20:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:18.800 20:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:18.800 20:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:18.800 20:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:11:18.800 20:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:18.800 20:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:18.800 20:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:18.800 20:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:18.800 20:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:18.800 20:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:11:18.800 20:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:11:18.800 20:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:18.800 20:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:18.800 [2024-10-17 20:08:04.401747] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:11:18.800 20:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:18.800 20:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d9f38c93-1083-4215-96d4-93f6f5b6b49d
00:11:18.800 20:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z d9f38c93-1083-4215-96d4-93f6f5b6b49d ']'
00:11:18.800 20:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:11:18.800 20:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:18.800 20:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:19.059 [2024-10-17 20:08:04.453720] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:11:19.059 [2024-10-17 20:08:04.453758] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:11:19.059 [2024-10-17 20:08:04.453857] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:11:19.059 [2024-10-17 20:08:04.453946] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:11:19.059 [2024-10-17 20:08:04.453962] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:11:19.059 20:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:19.059 20:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:19.059 20:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:19.059 20:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:19.059 20:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:11:19.059 20:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:19.059 20:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:11:19.059 20:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:11:19.059 20:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:11:19.059 20:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:11:19.059 20:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:19.059 20:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:19.059 20:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:19.059 20:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:11:19.059 20:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:11:19.059 20:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:19.059 20:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:19.059 20:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:19.059 20:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:11:19.059 20:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:11:19.059 20:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:19.059 20:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:19.059 20:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:19.059 20:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:11:19.059 20:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:11:19.059 20:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:19.059 20:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:19.059 20:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:19.059 20:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:11:19.059 20:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:11:19.059 20:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0
00:11:19.059 20:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:11:19.059 20:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:11:19.059 20:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:11:19.059 20:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:11:19.059 20:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:11:19.059 20:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:11:19.060 20:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:19.060 20:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:19.060 [2024-10-17 20:08:04.605786] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:11:19.060 [2024-10-17 20:08:04.608537] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:11:19.060 [2024-10-17 20:08:04.608609] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:11:19.060 [2024-10-17 20:08:04.608680] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:11:19.060 [2024-10-17 20:08:04.608746] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:11:19.060 [2024-10-17 20:08:04.608778] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:11:19.060 [2024-10-17 20:08:04.608804] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:11:19.060 [2024-10-17 20:08:04.608820] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:11:19.060 request:
00:11:19.060 {
00:11:19.060 "name": "raid_bdev1",
00:11:19.060 "raid_level": "concat",
00:11:19.060 "base_bdevs": [
00:11:19.060 "malloc1",
00:11:19.060 "malloc2",
00:11:19.060 "malloc3"
00:11:19.060 ],
00:11:19.060 "strip_size_kb": 64,
00:11:19.060 "superblock": false,
00:11:19.060 "method": "bdev_raid_create",
00:11:19.060 "req_id": 1
00:11:19.060 }
00:11:19.060 Got JSON-RPC error response
00:11:19.060 response:
00:11:19.060 {
00:11:19.060 "code": -17,
00:11:19.060 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:11:19.060 }
00:11:19.060 20:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:11:19.060 20:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1
00:11:19.060 20:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:11:19.060 20:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:11:19.060 20:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:11:19.060 20:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:19.060 20:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:11:19.060 20:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:19.060 20:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:19.060 20:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:19.060 20:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:11:19.060 20:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:11:19.060 20:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:11:19.060 20:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:19.060 20:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:19.060 [2024-10-17 20:08:04.669794] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:11:19.060 [2024-10-17 20:08:04.669877] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:19.060 [2024-10-17 20:08:04.669911] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:11:19.060 [2024-10-17 20:08:04.669926] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:19.060 [2024-10-17 20:08:04.672983] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:19.060 [2024-10-17 20:08:04.673172] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:11:19.060 [2024-10-17 20:08:04.673303] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:11:19.060 [2024-10-17 20:08:04.673396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:11:19.060 pt1
00:11:19.060 20:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:19.060 20:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3
00:11:19.060 20:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:19.060 20:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:19.060 20:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:19.060 20:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:19.060 20:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:11:19.060 20:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:19.060 20:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:19.060 20:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:19.060 20:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:19.060 20:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:19.060 20:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:19.060 20:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:19.060 20:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:19.060 20:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:19.319 20:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:19.319 "name": "raid_bdev1",
00:11:19.319 "uuid": "d9f38c93-1083-4215-96d4-93f6f5b6b49d",
00:11:19.319 "strip_size_kb": 64,
00:11:19.319 "state": "configuring",
00:11:19.319 "raid_level": "concat",
00:11:19.319 "superblock": true,
00:11:19.319 "num_base_bdevs": 3,
00:11:19.319 "num_base_bdevs_discovered": 1,
00:11:19.319 "num_base_bdevs_operational": 3,
00:11:19.319 "base_bdevs_list": [
00:11:19.319 {
00:11:19.319 "name": "pt1",
00:11:19.319 "uuid": "00000000-0000-0000-0000-000000000001",
00:11:19.319 "is_configured": true,
00:11:19.319 "data_offset": 2048,
00:11:19.319 "data_size": 63488
00:11:19.319 },
00:11:19.319 {
00:11:19.319 "name": null,
00:11:19.319 "uuid": "00000000-0000-0000-0000-000000000002",
00:11:19.319 "is_configured": false,
00:11:19.319 "data_offset": 2048,
00:11:19.319 "data_size": 63488
00:11:19.319 },
00:11:19.319 {
00:11:19.319 "name": null,
00:11:19.319 "uuid": "00000000-0000-0000-0000-000000000003",
00:11:19.319 "is_configured": false,
00:11:19.319 "data_offset": 2048,
00:11:19.319 "data_size": 63488
00:11:19.319 }
00:11:19.319 ]
00:11:19.319 }'
00:11:19.319 20:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:19.319 20:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:19.578 20:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']'
00:11:19.578 20:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:11:19.578 20:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:19.578 20:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:19.578 [2024-10-17 20:08:05.217968] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:11:19.578 [2024-10-17 20:08:05.218090] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:19.578 [2024-10-17 20:08:05.218128] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80
00:11:19.578 [2024-10-17 20:08:05.218145] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:19.578 [2024-10-17 20:08:05.218780] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:19.578 [2024-10-17 20:08:05.218821] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:11:19.578 [2024-10-17 20:08:05.218932] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:11:19.578 [2024-10-17 20:08:05.218965] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:11:19.578 pt2
00:11:19.578 20:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:19.578 20:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:11:19.578 20:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:19.578 20:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:19.578 [2024-10-17 20:08:05.225924] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:11:19.836 20:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:19.836 20:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3
00:11:19.836 20:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:19.836 20:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:19.836 20:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:19.836 20:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:19.836 20:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:11:19.836 20:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:19.836 20:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:19.836 20:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:19.836 20:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:19.836 20:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:19.836 20:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:19.836 20:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:19.836 20:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:19.836 20:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:19.836 20:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:19.836 "name": "raid_bdev1",
00:11:19.836 "uuid": "d9f38c93-1083-4215-96d4-93f6f5b6b49d",
00:11:19.836 "strip_size_kb": 64,
00:11:19.836 "state": "configuring",
00:11:19.836 "raid_level": "concat",
00:11:19.836 "superblock": true,
00:11:19.836 "num_base_bdevs": 3,
00:11:19.836 "num_base_bdevs_discovered": 1,
00:11:19.836 "num_base_bdevs_operational": 3,
00:11:19.836 "base_bdevs_list": [
00:11:19.836 {
00:11:19.836 "name": "pt1",
00:11:19.837 "uuid": "00000000-0000-0000-0000-000000000001",
00:11:19.837 "is_configured": true,
00:11:19.837 "data_offset": 2048,
00:11:19.837 "data_size": 63488
00:11:19.837 },
00:11:19.837 {
00:11:19.837 "name": null,
00:11:19.837 "uuid": "00000000-0000-0000-0000-000000000002",
00:11:19.837 "is_configured": false,
00:11:19.837 "data_offset": 0,
00:11:19.837 "data_size": 63488
00:11:19.837 },
00:11:19.837 {
00:11:19.837 "name": null,
00:11:19.837 "uuid": "00000000-0000-0000-0000-000000000003",
00:11:19.837 "is_configured": false,
00:11:19.837 "data_offset": 2048,
00:11:19.837 "data_size": 63488
00:11:19.837 }
00:11:19.837 ]
00:11:19.837 }'
00:11:19.837 20:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:19.837 20:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:20.404 20:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:11:20.404 20:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:11:20.404 20:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:11:20.404 20:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:20.404 20:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:20.404 [2024-10-17 20:08:05.798139] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:11:20.404 [2024-10-17 20:08:05.798239] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:20.404 [2024-10-17 20:08:05.798269] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80
00:11:20.404 [2024-10-17 20:08:05.798287] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:20.404 [2024-10-17 20:08:05.798851] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:20.404 [2024-10-17 20:08:05.798879] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:11:20.404 [2024-10-17 20:08:05.798975] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:11:20.404 [2024-10-17 20:08:05.799042] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:11:20.404 pt2
00:11:20.404 20:08:05 bdev_raid.raid_superblock_test
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.404 20:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:20.404 20:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:20.404 20:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:20.404 20:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.404 20:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.404 [2024-10-17 20:08:05.810129] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:20.404 [2024-10-17 20:08:05.810230] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:20.404 [2024-10-17 20:08:05.810252] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:20.404 [2024-10-17 20:08:05.810268] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:20.404 [2024-10-17 20:08:05.810789] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:20.404 [2024-10-17 20:08:05.810827] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:20.404 [2024-10-17 20:08:05.810898] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:20.404 [2024-10-17 20:08:05.810944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:20.404 [2024-10-17 20:08:05.811131] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:20.404 [2024-10-17 20:08:05.811150] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:20.404 [2024-10-17 20:08:05.811506] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:20.404 [2024-10-17 
20:08:05.811702] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:20.404 [2024-10-17 20:08:05.811716] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:20.404 [2024-10-17 20:08:05.811882] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:20.404 pt3 00:11:20.404 20:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.404 20:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:20.404 20:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:20.404 20:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:20.404 20:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:20.404 20:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:20.404 20:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:20.404 20:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:20.404 20:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:20.404 20:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.404 20:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.404 20:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.404 20:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.404 20:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.404 20:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:11:20.404 20:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.404 20:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.404 20:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.404 20:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.404 "name": "raid_bdev1", 00:11:20.404 "uuid": "d9f38c93-1083-4215-96d4-93f6f5b6b49d", 00:11:20.404 "strip_size_kb": 64, 00:11:20.404 "state": "online", 00:11:20.404 "raid_level": "concat", 00:11:20.404 "superblock": true, 00:11:20.404 "num_base_bdevs": 3, 00:11:20.404 "num_base_bdevs_discovered": 3, 00:11:20.404 "num_base_bdevs_operational": 3, 00:11:20.404 "base_bdevs_list": [ 00:11:20.404 { 00:11:20.404 "name": "pt1", 00:11:20.404 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:20.404 "is_configured": true, 00:11:20.404 "data_offset": 2048, 00:11:20.404 "data_size": 63488 00:11:20.404 }, 00:11:20.404 { 00:11:20.404 "name": "pt2", 00:11:20.404 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:20.404 "is_configured": true, 00:11:20.404 "data_offset": 2048, 00:11:20.404 "data_size": 63488 00:11:20.404 }, 00:11:20.404 { 00:11:20.404 "name": "pt3", 00:11:20.404 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:20.404 "is_configured": true, 00:11:20.404 "data_offset": 2048, 00:11:20.404 "data_size": 63488 00:11:20.404 } 00:11:20.404 ] 00:11:20.404 }' 00:11:20.404 20:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.404 20:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.971 20:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:20.971 20:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:20.971 20:08:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:20.971 20:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:20.971 20:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:20.971 20:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:20.971 20:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:20.971 20:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.971 20:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:20.971 20:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.971 [2024-10-17 20:08:06.358734] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:20.971 20:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.971 20:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:20.971 "name": "raid_bdev1", 00:11:20.971 "aliases": [ 00:11:20.971 "d9f38c93-1083-4215-96d4-93f6f5b6b49d" 00:11:20.971 ], 00:11:20.971 "product_name": "Raid Volume", 00:11:20.971 "block_size": 512, 00:11:20.971 "num_blocks": 190464, 00:11:20.971 "uuid": "d9f38c93-1083-4215-96d4-93f6f5b6b49d", 00:11:20.971 "assigned_rate_limits": { 00:11:20.971 "rw_ios_per_sec": 0, 00:11:20.971 "rw_mbytes_per_sec": 0, 00:11:20.971 "r_mbytes_per_sec": 0, 00:11:20.971 "w_mbytes_per_sec": 0 00:11:20.971 }, 00:11:20.971 "claimed": false, 00:11:20.971 "zoned": false, 00:11:20.971 "supported_io_types": { 00:11:20.971 "read": true, 00:11:20.971 "write": true, 00:11:20.971 "unmap": true, 00:11:20.971 "flush": true, 00:11:20.971 "reset": true, 00:11:20.971 "nvme_admin": false, 00:11:20.971 "nvme_io": false, 00:11:20.971 "nvme_io_md": false, 00:11:20.971 
"write_zeroes": true, 00:11:20.971 "zcopy": false, 00:11:20.971 "get_zone_info": false, 00:11:20.971 "zone_management": false, 00:11:20.971 "zone_append": false, 00:11:20.971 "compare": false, 00:11:20.971 "compare_and_write": false, 00:11:20.971 "abort": false, 00:11:20.971 "seek_hole": false, 00:11:20.971 "seek_data": false, 00:11:20.971 "copy": false, 00:11:20.971 "nvme_iov_md": false 00:11:20.971 }, 00:11:20.971 "memory_domains": [ 00:11:20.971 { 00:11:20.971 "dma_device_id": "system", 00:11:20.971 "dma_device_type": 1 00:11:20.971 }, 00:11:20.971 { 00:11:20.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.971 "dma_device_type": 2 00:11:20.971 }, 00:11:20.971 { 00:11:20.971 "dma_device_id": "system", 00:11:20.971 "dma_device_type": 1 00:11:20.971 }, 00:11:20.971 { 00:11:20.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.971 "dma_device_type": 2 00:11:20.971 }, 00:11:20.971 { 00:11:20.971 "dma_device_id": "system", 00:11:20.971 "dma_device_type": 1 00:11:20.971 }, 00:11:20.971 { 00:11:20.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.971 "dma_device_type": 2 00:11:20.971 } 00:11:20.971 ], 00:11:20.971 "driver_specific": { 00:11:20.971 "raid": { 00:11:20.971 "uuid": "d9f38c93-1083-4215-96d4-93f6f5b6b49d", 00:11:20.971 "strip_size_kb": 64, 00:11:20.971 "state": "online", 00:11:20.971 "raid_level": "concat", 00:11:20.971 "superblock": true, 00:11:20.971 "num_base_bdevs": 3, 00:11:20.971 "num_base_bdevs_discovered": 3, 00:11:20.971 "num_base_bdevs_operational": 3, 00:11:20.971 "base_bdevs_list": [ 00:11:20.971 { 00:11:20.971 "name": "pt1", 00:11:20.971 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:20.971 "is_configured": true, 00:11:20.971 "data_offset": 2048, 00:11:20.971 "data_size": 63488 00:11:20.971 }, 00:11:20.971 { 00:11:20.971 "name": "pt2", 00:11:20.971 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:20.971 "is_configured": true, 00:11:20.971 "data_offset": 2048, 00:11:20.971 "data_size": 63488 00:11:20.971 }, 00:11:20.971 
{ 00:11:20.971 "name": "pt3", 00:11:20.971 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:20.971 "is_configured": true, 00:11:20.971 "data_offset": 2048, 00:11:20.971 "data_size": 63488 00:11:20.971 } 00:11:20.971 ] 00:11:20.971 } 00:11:20.971 } 00:11:20.971 }' 00:11:20.971 20:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:20.971 20:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:20.971 pt2 00:11:20.971 pt3' 00:11:20.971 20:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:20.971 20:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:20.971 20:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:20.971 20:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:20.971 20:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:20.971 20:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.971 20:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.971 20:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.971 20:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:20.971 20:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:20.971 20:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:20.971 20:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:20.971 20:08:06 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.971 20:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.971 20:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:20.971 20:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.230 20:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:21.230 20:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:21.230 20:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:21.230 20:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:21.230 20:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.230 20:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.230 20:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.230 20:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.230 20:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:21.230 20:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:21.230 20:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:21.230 20:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:21.230 20:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.230 20:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.230 
[2024-10-17 20:08:06.698790] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:21.230 20:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.230 20:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' d9f38c93-1083-4215-96d4-93f6f5b6b49d '!=' d9f38c93-1083-4215-96d4-93f6f5b6b49d ']' 00:11:21.230 20:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:11:21.230 20:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:21.230 20:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:21.230 20:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66798 00:11:21.230 20:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 66798 ']' 00:11:21.230 20:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 66798 00:11:21.230 20:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:11:21.230 20:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:21.230 20:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66798 00:11:21.230 killing process with pid 66798 00:11:21.230 20:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:21.230 20:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:21.230 20:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66798' 00:11:21.230 20:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 66798 00:11:21.230 [2024-10-17 20:08:06.776168] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:21.230 20:08:06 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@974 -- # wait 66798 00:11:21.230 [2024-10-17 20:08:06.776282] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:21.230 [2024-10-17 20:08:06.776386] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:21.230 [2024-10-17 20:08:06.776405] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:21.488 [2024-10-17 20:08:07.035889] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:22.421 20:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:22.421 00:11:22.421 real 0m5.696s 00:11:22.421 user 0m8.671s 00:11:22.421 sys 0m0.840s 00:11:22.421 ************************************ 00:11:22.421 END TEST raid_superblock_test 00:11:22.421 ************************************ 00:11:22.421 20:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:22.421 20:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.421 20:08:08 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:11:22.421 20:08:08 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:22.421 20:08:08 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:22.421 20:08:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:22.421 ************************************ 00:11:22.421 START TEST raid_read_error_test 00:11:22.421 ************************************ 00:11:22.421 20:08:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 read 00:11:22.421 20:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:22.421 20:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:11:22.421 20:08:08 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:22.421 20:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:22.421 20:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:22.421 20:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:22.421 20:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:22.421 20:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:22.421 20:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:22.421 20:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:22.421 20:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:22.421 20:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:22.421 20:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:22.421 20:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:22.421 20:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:22.421 20:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:22.421 20:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:22.421 20:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:22.421 20:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:22.421 20:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:22.421 20:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:22.421 20:08:08 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:22.421 20:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:22.421 20:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:22.421 20:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:22.421 20:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.W6643MVky2 00:11:22.421 20:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67062 00:11:22.421 20:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67062 00:11:22.421 20:08:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 67062 ']' 00:11:22.421 20:08:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:22.421 20:08:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:22.421 20:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:22.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:22.421 20:08:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:22.421 20:08:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:22.421 20:08:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.678 [2024-10-17 20:08:08.162079] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 
00:11:22.678 [2024-10-17 20:08:08.162292] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67062 ] 00:11:22.937 [2024-10-17 20:08:08.342676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:22.937 [2024-10-17 20:08:08.500753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.195 [2024-10-17 20:08:08.693374] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:23.195 [2024-10-17 20:08:08.693684] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:23.763 20:08:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:23.763 20:08:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:23.763 20:08:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:23.763 20:08:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:23.763 20:08:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.763 20:08:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.763 BaseBdev1_malloc 00:11:23.763 20:08:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.763 20:08:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:23.763 20:08:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.763 20:08:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.763 true 00:11:23.763 20:08:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:23.763 20:08:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:23.763 20:08:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.763 20:08:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.763 [2024-10-17 20:08:09.218787] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:23.763 [2024-10-17 20:08:09.218868] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:23.763 [2024-10-17 20:08:09.218899] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:23.763 [2024-10-17 20:08:09.218932] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:23.763 [2024-10-17 20:08:09.221808] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:23.763 [2024-10-17 20:08:09.221866] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:23.763 BaseBdev1 00:11:23.763 20:08:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.763 20:08:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:23.763 20:08:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:23.763 20:08:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.763 20:08:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.763 BaseBdev2_malloc 00:11:23.763 20:08:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.763 20:08:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:23.763 20:08:09 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.763 20:08:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.763 true 00:11:23.763 20:08:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.763 20:08:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:23.763 20:08:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.763 20:08:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.763 [2024-10-17 20:08:09.281789] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:23.763 [2024-10-17 20:08:09.282101] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:23.763 [2024-10-17 20:08:09.282143] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:23.763 [2024-10-17 20:08:09.282162] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:23.763 [2024-10-17 20:08:09.285124] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:23.763 [2024-10-17 20:08:09.285186] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:23.763 BaseBdev2 00:11:23.763 20:08:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.763 20:08:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:23.763 20:08:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:23.763 20:08:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.763 20:08:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.763 BaseBdev3_malloc 00:11:23.763 20:08:09 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.763 20:08:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:23.763 20:08:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.763 20:08:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.763 true 00:11:23.763 20:08:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.763 20:08:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:23.763 20:08:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.763 20:08:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.763 [2024-10-17 20:08:09.362188] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:23.763 [2024-10-17 20:08:09.362273] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:23.763 [2024-10-17 20:08:09.362300] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:23.763 [2024-10-17 20:08:09.362319] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:23.763 [2024-10-17 20:08:09.365252] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:23.763 [2024-10-17 20:08:09.365315] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:23.763 BaseBdev3 00:11:23.763 20:08:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.763 20:08:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:23.764 20:08:09 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.764 20:08:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.764 [2024-10-17 20:08:09.370328] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:23.764 [2024-10-17 20:08:09.372837] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:23.764 [2024-10-17 20:08:09.372940] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:23.764 [2024-10-17 20:08:09.373241] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:23.764 [2024-10-17 20:08:09.373261] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:23.764 [2024-10-17 20:08:09.373648] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:23.764 [2024-10-17 20:08:09.373877] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:23.764 [2024-10-17 20:08:09.373899] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:23.764 [2024-10-17 20:08:09.374104] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:23.764 20:08:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.764 20:08:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:23.764 20:08:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:23.764 20:08:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:23.764 20:08:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:23.764 20:08:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:23.764 20:08:09 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:23.764 20:08:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.764 20:08:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.764 20:08:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.764 20:08:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.764 20:08:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.764 20:08:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:23.764 20:08:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.764 20:08:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.764 20:08:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.022 20:08:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.022 "name": "raid_bdev1", 00:11:24.022 "uuid": "fd695475-ec0f-497a-83c6-177207257ccc", 00:11:24.022 "strip_size_kb": 64, 00:11:24.022 "state": "online", 00:11:24.022 "raid_level": "concat", 00:11:24.022 "superblock": true, 00:11:24.022 "num_base_bdevs": 3, 00:11:24.022 "num_base_bdevs_discovered": 3, 00:11:24.022 "num_base_bdevs_operational": 3, 00:11:24.022 "base_bdevs_list": [ 00:11:24.022 { 00:11:24.022 "name": "BaseBdev1", 00:11:24.022 "uuid": "be27968d-54ac-5113-b5c4-b201eb521304", 00:11:24.022 "is_configured": true, 00:11:24.022 "data_offset": 2048, 00:11:24.022 "data_size": 63488 00:11:24.022 }, 00:11:24.022 { 00:11:24.022 "name": "BaseBdev2", 00:11:24.022 "uuid": "41bbaac8-f795-5c01-b6b0-65f6cb65b942", 00:11:24.022 "is_configured": true, 00:11:24.022 "data_offset": 2048, 00:11:24.022 "data_size": 63488 
00:11:24.022 }, 00:11:24.022 { 00:11:24.022 "name": "BaseBdev3", 00:11:24.022 "uuid": "3d7c2dee-3270-51ea-9183-f56584a73381", 00:11:24.022 "is_configured": true, 00:11:24.022 "data_offset": 2048, 00:11:24.022 "data_size": 63488 00:11:24.022 } 00:11:24.022 ] 00:11:24.022 }' 00:11:24.022 20:08:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.022 20:08:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.280 20:08:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:24.280 20:08:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:24.538 [2024-10-17 20:08:10.016064] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:25.471 20:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:25.471 20:08:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.471 20:08:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.471 20:08:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.471 20:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:25.471 20:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:25.471 20:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:11:25.471 20:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:25.471 20:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:25.471 20:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
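The trace above shows the branch at bdev_raid.sh@832-835: because the level under test is concat, `[[ concat = raid1 ]]` fails and the suite still expects all 3 base bdevs after the injected read error. A standalone sketch of that decision, under the assumption that raid1 and raid5f are the levels the suite treats as redundant (other levels surface the error without dropping a base bdev):

```shell
# Sketch of the expected_num_base_bdevs branch traced above.
# Assumption: raid1/raid5f are the redundant levels; for anything else an
# injected I/O error leaves every base bdev in place.
raid_level=concat
num_base_bdevs=3
case "$raid_level" in
  raid1|raid5f) expected_num_base_bdevs=$((num_base_bdevs - 1)) ;;
  *)            expected_num_base_bdevs=$num_base_bdevs ;;
esac
echo "expected_num_base_bdevs=$expected_num_base_bdevs"
```

With `raid_level=concat` this keeps the expectation at 3, matching the `expected_num_base_bdevs=3` line in the trace.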
00:11:25.471 20:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:25.471 20:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:25.471 20:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:25.471 20:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.471 20:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.471 20:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.471 20:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.471 20:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.471 20:08:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.471 20:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:25.471 20:08:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.471 20:08:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.471 20:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.471 "name": "raid_bdev1", 00:11:25.471 "uuid": "fd695475-ec0f-497a-83c6-177207257ccc", 00:11:25.471 "strip_size_kb": 64, 00:11:25.471 "state": "online", 00:11:25.471 "raid_level": "concat", 00:11:25.471 "superblock": true, 00:11:25.471 "num_base_bdevs": 3, 00:11:25.471 "num_base_bdevs_discovered": 3, 00:11:25.471 "num_base_bdevs_operational": 3, 00:11:25.471 "base_bdevs_list": [ 00:11:25.471 { 00:11:25.471 "name": "BaseBdev1", 00:11:25.471 "uuid": "be27968d-54ac-5113-b5c4-b201eb521304", 00:11:25.471 "is_configured": true, 00:11:25.471 "data_offset": 2048, 00:11:25.471 "data_size": 63488 
00:11:25.471 }, 00:11:25.471 { 00:11:25.471 "name": "BaseBdev2", 00:11:25.471 "uuid": "41bbaac8-f795-5c01-b6b0-65f6cb65b942", 00:11:25.471 "is_configured": true, 00:11:25.471 "data_offset": 2048, 00:11:25.471 "data_size": 63488 00:11:25.471 }, 00:11:25.471 { 00:11:25.471 "name": "BaseBdev3", 00:11:25.471 "uuid": "3d7c2dee-3270-51ea-9183-f56584a73381", 00:11:25.471 "is_configured": true, 00:11:25.471 "data_offset": 2048, 00:11:25.471 "data_size": 63488 00:11:25.471 } 00:11:25.471 ] 00:11:25.471 }' 00:11:25.471 20:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.471 20:08:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.037 20:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:26.037 20:08:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.037 20:08:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.037 [2024-10-17 20:08:11.439120] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:26.037 [2024-10-17 20:08:11.439152] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:26.037 [2024-10-17 20:08:11.442798] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:26.037 [2024-10-17 20:08:11.443008] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:26.037 [2024-10-17 20:08:11.443117] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:26.037 [2024-10-17 20:08:11.443366] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:26.037 { 00:11:26.037 "results": [ 00:11:26.037 { 00:11:26.037 "job": "raid_bdev1", 00:11:26.037 "core_mask": "0x1", 00:11:26.037 "workload": "randrw", 00:11:26.037 "percentage": 50, 
00:11:26.037 "status": "finished", 00:11:26.037 "queue_depth": 1, 00:11:26.037 "io_size": 131072, 00:11:26.037 "runtime": 1.419568, 00:11:26.037 "iops": 11123.806679215086, 00:11:26.037 "mibps": 1390.4758349018857, 00:11:26.037 "io_failed": 1, 00:11:26.037 "io_timeout": 0, 00:11:26.037 "avg_latency_us": 125.88786589297227, 00:11:26.037 "min_latency_us": 36.305454545454545, 00:11:26.037 "max_latency_us": 1846.9236363636364 00:11:26.037 } 00:11:26.037 ], 00:11:26.037 "core_count": 1 00:11:26.037 } 00:11:26.037 20:08:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.037 20:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67062 00:11:26.037 20:08:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 67062 ']' 00:11:26.037 20:08:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 67062 00:11:26.037 20:08:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:11:26.037 20:08:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:26.037 20:08:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67062 00:11:26.037 killing process with pid 67062 00:11:26.037 20:08:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:26.037 20:08:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:26.037 20:08:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67062' 00:11:26.037 20:08:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 67062 00:11:26.037 [2024-10-17 20:08:11.482794] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:26.037 20:08:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 67062 00:11:26.037 [2024-10-17 
20:08:11.679568] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:27.414 20:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.W6643MVky2 00:11:27.414 20:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:27.414 20:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:27.414 20:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:11:27.414 20:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:27.414 20:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:27.414 20:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:27.414 20:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:11:27.414 00:11:27.414 real 0m4.622s 00:11:27.414 user 0m5.800s 00:11:27.414 sys 0m0.586s 00:11:27.414 ************************************ 00:11:27.414 END TEST raid_read_error_test 00:11:27.414 ************************************ 00:11:27.414 20:08:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:27.414 20:08:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.414 20:08:12 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:11:27.414 20:08:12 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:27.414 20:08:12 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:27.414 20:08:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:27.414 ************************************ 00:11:27.414 START TEST raid_write_error_test 00:11:27.414 ************************************ 00:11:27.414 20:08:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 write 00:11:27.414 20:08:12 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:27.414 20:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:11:27.414 20:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:27.414 20:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:27.414 20:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:27.414 20:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:27.414 20:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:27.414 20:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:27.414 20:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:27.414 20:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:27.414 20:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:27.414 20:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:27.414 20:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:27.414 20:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:27.414 20:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:27.414 20:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:27.414 20:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:27.415 20:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:27.415 20:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:27.415 20:08:12 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:27.415 20:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:27.415 20:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:27.415 20:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:27.415 20:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:27.415 20:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:27.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:27.415 20:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.VbqRyjKtaR 00:11:27.415 20:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67202 00:11:27.415 20:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67202 00:11:27.415 20:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:27.415 20:08:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 67202 ']' 00:11:27.415 20:08:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:27.415 20:08:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:27.415 20:08:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
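The read-error run earlier extracted its failure rate from the bdevperf log with `grep -v Job | grep raid_bdev1 | awk '{print $6}'` (bdev_raid.sh@845), yielding `fail_per_s=0.70`; the write test just created its own log the same way with `mktemp -p /raidtest`. A self-contained sketch of that extraction over a made-up summary table (the column layout here is an assumption for illustration, not the exact bdevperf output format):

```shell
# Hypothetical bdevperf summary table; real bdevperf columns may differ.
# The pipeline mirrors the grep/awk extraction at bdev_raid.sh@845 above.
bdevperf_log=$(mktemp)
cat > "$bdevperf_log" <<'EOF'
Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 1)
raid_bdev1 : 1.42 11123.81 1390.48 0.70 0.00 125.89
EOF
# Drop the "Job:" header, keep the raid_bdev1 data row, take field 6 (fail/s).
fail_per_s=$(grep -v Job "$bdevperf_log" | grep raid_bdev1 | awk '{print $6}')
echo "fail_per_s=$fail_per_s"
rm -f "$bdevperf_log"
```

The test then asserts `[[ $fail_per_s != \0\.\0\0 ]]`, i.e. that the injected errors actually produced failed I/Os.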
00:11:27.415 20:08:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:27.415 20:08:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.415 [2024-10-17 20:08:12.835942] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 00:11:27.415 [2024-10-17 20:08:12.836170] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67202 ] 00:11:27.415 [2024-10-17 20:08:13.012855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:27.674 [2024-10-17 20:08:13.143957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.933 [2024-10-17 20:08:13.332390] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:27.933 [2024-10-17 20:08:13.332463] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:28.191 20:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:28.191 20:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:28.191 20:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:28.191 20:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:28.191 20:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.191 20:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.191 BaseBdev1_malloc 00:11:28.191 20:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.191 20:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:11:28.191 20:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.191 20:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.456 true 00:11:28.456 20:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.456 20:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:28.456 20:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.456 20:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.456 [2024-10-17 20:08:13.858056] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:28.457 [2024-10-17 20:08:13.858156] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:28.457 [2024-10-17 20:08:13.858203] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:28.457 [2024-10-17 20:08:13.858222] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:28.457 [2024-10-17 20:08:13.861244] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:28.457 [2024-10-17 20:08:13.861292] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:28.457 BaseBdev1 00:11:28.457 20:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.457 20:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:28.457 20:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:28.457 20:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.457 20:08:13 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:28.457 BaseBdev2_malloc 00:11:28.457 20:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.457 20:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:28.457 20:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.457 20:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.457 true 00:11:28.457 20:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.457 20:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:28.457 20:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.457 20:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.457 [2024-10-17 20:08:13.924405] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:28.457 [2024-10-17 20:08:13.924773] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:28.457 [2024-10-17 20:08:13.924819] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:28.457 [2024-10-17 20:08:13.924841] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:28.457 [2024-10-17 20:08:13.927865] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:28.457 [2024-10-17 20:08:13.928118] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:28.457 BaseBdev2 00:11:28.457 20:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.457 20:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:28.457 20:08:13 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:28.457 20:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.457 20:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.457 BaseBdev3_malloc 00:11:28.457 20:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.457 20:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:28.457 20:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.457 20:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.457 true 00:11:28.457 20:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.457 20:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:28.457 20:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.457 20:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.457 [2024-10-17 20:08:13.995682] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:28.457 [2024-10-17 20:08:13.995763] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:28.457 [2024-10-17 20:08:13.995801] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:28.457 [2024-10-17 20:08:13.995818] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:28.457 [2024-10-17 20:08:13.998826] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:28.457 [2024-10-17 20:08:13.998890] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:11:28.457 BaseBdev3 00:11:28.457 20:08:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.457 20:08:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:28.457 20:08:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.457 20:08:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.457 [2024-10-17 20:08:14.007777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:28.457 [2024-10-17 20:08:14.010355] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:28.457 [2024-10-17 20:08:14.010460] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:28.457 [2024-10-17 20:08:14.010704] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:28.457 [2024-10-17 20:08:14.010724] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:28.457 [2024-10-17 20:08:14.011068] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:28.457 [2024-10-17 20:08:14.011333] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:28.457 [2024-10-17 20:08:14.011373] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:28.457 [2024-10-17 20:08:14.011598] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:28.457 20:08:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.457 20:08:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:28.457 20:08:14 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:28.457 20:08:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:28.457 20:08:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:28.457 20:08:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:28.457 20:08:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:28.457 20:08:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.457 20:08:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.457 20:08:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.457 20:08:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.457 20:08:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.457 20:08:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.457 20:08:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:28.457 20:08:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.457 20:08:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.457 20:08:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.457 "name": "raid_bdev1", 00:11:28.457 "uuid": "d6dceef1-e1e0-42e6-bc57-96d7ed77b258", 00:11:28.457 "strip_size_kb": 64, 00:11:28.457 "state": "online", 00:11:28.457 "raid_level": "concat", 00:11:28.457 "superblock": true, 00:11:28.457 "num_base_bdevs": 3, 00:11:28.457 "num_base_bdevs_discovered": 3, 00:11:28.457 "num_base_bdevs_operational": 3, 00:11:28.457 "base_bdevs_list": [ 00:11:28.457 { 00:11:28.457 
"name": "BaseBdev1", 00:11:28.457 "uuid": "c9605eca-f133-55a3-ab14-04dad127a212", 00:11:28.457 "is_configured": true, 00:11:28.457 "data_offset": 2048, 00:11:28.457 "data_size": 63488 00:11:28.457 }, 00:11:28.457 { 00:11:28.457 "name": "BaseBdev2", 00:11:28.457 "uuid": "90621579-f84c-5038-a598-cb9aa2c61b1d", 00:11:28.457 "is_configured": true, 00:11:28.457 "data_offset": 2048, 00:11:28.457 "data_size": 63488 00:11:28.457 }, 00:11:28.457 { 00:11:28.457 "name": "BaseBdev3", 00:11:28.457 "uuid": "d39885d0-5921-5eae-99d6-fbcb061e4843", 00:11:28.457 "is_configured": true, 00:11:28.457 "data_offset": 2048, 00:11:28.457 "data_size": 63488 00:11:28.457 } 00:11:28.457 ] 00:11:28.457 }' 00:11:28.457 20:08:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.457 20:08:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.051 20:08:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:29.051 20:08:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:29.051 [2024-10-17 20:08:14.613313] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:29.985 20:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:29.985 20:08:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.985 20:08:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.985 20:08:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.985 20:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:29.985 20:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:29.985 20:08:15 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:11:29.985 20:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:29.985 20:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:29.985 20:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:29.985 20:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:29.985 20:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:29.985 20:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:29.985 20:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.986 20:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.986 20:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.986 20:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.986 20:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.986 20:08:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.986 20:08:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.986 20:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:29.986 20:08:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.986 20:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.986 "name": "raid_bdev1", 00:11:29.986 "uuid": "d6dceef1-e1e0-42e6-bc57-96d7ed77b258", 00:11:29.986 "strip_size_kb": 64, 00:11:29.986 "state": "online", 
00:11:29.986 "raid_level": "concat", 00:11:29.986 "superblock": true, 00:11:29.986 "num_base_bdevs": 3, 00:11:29.986 "num_base_bdevs_discovered": 3, 00:11:29.986 "num_base_bdevs_operational": 3, 00:11:29.986 "base_bdevs_list": [ 00:11:29.986 { 00:11:29.986 "name": "BaseBdev1", 00:11:29.986 "uuid": "c9605eca-f133-55a3-ab14-04dad127a212", 00:11:29.986 "is_configured": true, 00:11:29.986 "data_offset": 2048, 00:11:29.986 "data_size": 63488 00:11:29.986 }, 00:11:29.986 { 00:11:29.986 "name": "BaseBdev2", 00:11:29.986 "uuid": "90621579-f84c-5038-a598-cb9aa2c61b1d", 00:11:29.986 "is_configured": true, 00:11:29.986 "data_offset": 2048, 00:11:29.986 "data_size": 63488 00:11:29.986 }, 00:11:29.986 { 00:11:29.986 "name": "BaseBdev3", 00:11:29.986 "uuid": "d39885d0-5921-5eae-99d6-fbcb061e4843", 00:11:29.986 "is_configured": true, 00:11:29.986 "data_offset": 2048, 00:11:29.986 "data_size": 63488 00:11:29.986 } 00:11:29.986 ] 00:11:29.986 }' 00:11:29.986 20:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.986 20:08:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.553 20:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:30.553 20:08:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.553 20:08:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.553 [2024-10-17 20:08:16.080642] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:30.553 [2024-10-17 20:08:16.080850] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:30.553 [2024-10-17 20:08:16.084331] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:30.553 [2024-10-17 20:08:16.084605] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:30.553 [2024-10-17 20:08:16.084675] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:30.553 [2024-10-17 20:08:16.084704] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:30.553 { 00:11:30.553 "results": [ 00:11:30.553 { 00:11:30.553 "job": "raid_bdev1", 00:11:30.553 "core_mask": "0x1", 00:11:30.553 "workload": "randrw", 00:11:30.553 "percentage": 50, 00:11:30.553 "status": "finished", 00:11:30.553 "queue_depth": 1, 00:11:30.553 "io_size": 131072, 00:11:30.553 "runtime": 1.464855, 00:11:30.553 "iops": 11378.600612347298, 00:11:30.553 "mibps": 1422.3250765434123, 00:11:30.553 "io_failed": 1, 00:11:30.553 "io_timeout": 0, 00:11:30.553 "avg_latency_us": 123.0036237108623, 00:11:30.553 "min_latency_us": 36.77090909090909, 00:11:30.553 "max_latency_us": 1817.1345454545456 00:11:30.553 } 00:11:30.553 ], 00:11:30.553 "core_count": 1 00:11:30.553 } 00:11:30.553 20:08:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.553 20:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67202 00:11:30.553 20:08:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 67202 ']' 00:11:30.553 20:08:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 67202 00:11:30.553 20:08:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:11:30.553 20:08:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:30.553 20:08:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67202 00:11:30.553 killing process with pid 67202 00:11:30.553 20:08:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:30.553 20:08:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:30.553 20:08:16 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67202' 00:11:30.553 20:08:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 67202 00:11:30.553 20:08:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 67202 00:11:30.553 [2024-10-17 20:08:16.116892] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:30.812 [2024-10-17 20:08:16.307873] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:31.749 20:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.VbqRyjKtaR 00:11:31.749 20:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:31.749 20:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:31.749 20:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.68 00:11:31.749 20:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:31.749 20:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:31.749 20:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:31.749 20:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.68 != \0\.\0\0 ]] 00:11:31.749 00:11:31.749 real 0m4.632s 00:11:31.749 user 0m5.749s 00:11:31.749 sys 0m0.568s 00:11:31.749 20:08:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:31.749 ************************************ 00:11:31.749 END TEST raid_write_error_test 00:11:31.749 ************************************ 00:11:31.749 20:08:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.749 20:08:17 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:31.749 20:08:17 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:11:31.749 20:08:17 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:31.749 20:08:17 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:31.749 20:08:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:32.007 ************************************ 00:11:32.007 START TEST raid_state_function_test 00:11:32.007 ************************************ 00:11:32.007 20:08:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 false 00:11:32.007 20:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:32.007 20:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:11:32.007 20:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:32.007 20:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:32.007 20:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:32.007 20:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:32.007 20:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:32.007 20:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:32.007 20:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:32.007 20:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:32.007 20:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:32.007 20:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:32.007 20:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:32.007 20:08:17 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:32.007 20:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:32.007 20:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:32.007 20:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:32.007 20:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:32.007 20:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:32.007 20:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:32.007 20:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:32.007 20:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:32.007 20:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:32.007 20:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:32.007 20:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:32.007 20:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67346 00:11:32.007 20:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:32.007 20:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67346' 00:11:32.007 Process raid pid: 67346 00:11:32.007 20:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67346 00:11:32.007 20:08:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 67346 ']' 00:11:32.007 20:08:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:32.007 20:08:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:32.007 20:08:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:32.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:32.007 20:08:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:32.007 20:08:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.007 [2024-10-17 20:08:17.499649] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 00:11:32.007 [2024-10-17 20:08:17.500038] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:32.266 [2024-10-17 20:08:17.660907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:32.266 [2024-10-17 20:08:17.787212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.524 [2024-10-17 20:08:17.995169] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:32.524 [2024-10-17 20:08:17.995227] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:33.091 20:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:33.091 20:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:11:33.091 20:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:33.091 20:08:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.091 20:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.091 [2024-10-17 20:08:18.526199] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:33.091 [2024-10-17 20:08:18.526268] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:33.091 [2024-10-17 20:08:18.526285] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:33.091 [2024-10-17 20:08:18.526301] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:33.091 [2024-10-17 20:08:18.526311] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:33.091 [2024-10-17 20:08:18.526325] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:33.091 20:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.091 20:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:33.091 20:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:33.091 20:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:33.091 20:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:33.091 20:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:33.091 20:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:33.091 20:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.091 20:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.091 
20:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.091 20:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.091 20:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.091 20:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.091 20:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.091 20:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.091 20:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.091 20:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.091 "name": "Existed_Raid", 00:11:33.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.091 "strip_size_kb": 0, 00:11:33.091 "state": "configuring", 00:11:33.091 "raid_level": "raid1", 00:11:33.091 "superblock": false, 00:11:33.091 "num_base_bdevs": 3, 00:11:33.091 "num_base_bdevs_discovered": 0, 00:11:33.091 "num_base_bdevs_operational": 3, 00:11:33.091 "base_bdevs_list": [ 00:11:33.091 { 00:11:33.091 "name": "BaseBdev1", 00:11:33.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.091 "is_configured": false, 00:11:33.091 "data_offset": 0, 00:11:33.091 "data_size": 0 00:11:33.091 }, 00:11:33.091 { 00:11:33.091 "name": "BaseBdev2", 00:11:33.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.091 "is_configured": false, 00:11:33.091 "data_offset": 0, 00:11:33.091 "data_size": 0 00:11:33.091 }, 00:11:33.091 { 00:11:33.091 "name": "BaseBdev3", 00:11:33.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.091 "is_configured": false, 00:11:33.091 "data_offset": 0, 00:11:33.091 "data_size": 0 00:11:33.091 } 00:11:33.091 ] 00:11:33.091 }' 00:11:33.091 20:08:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.091 20:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.659 20:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:33.659 20:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.659 20:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.659 [2024-10-17 20:08:19.038295] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:33.659 [2024-10-17 20:08:19.038342] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:33.659 20:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.659 20:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:33.659 20:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.659 20:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.659 [2024-10-17 20:08:19.046302] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:33.659 [2024-10-17 20:08:19.046473] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:33.659 [2024-10-17 20:08:19.046590] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:33.659 [2024-10-17 20:08:19.046746] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:33.659 [2024-10-17 20:08:19.046903] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:33.659 [2024-10-17 20:08:19.046964] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:33.659 20:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.659 20:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:33.659 20:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.659 20:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.659 [2024-10-17 20:08:19.091089] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:33.659 BaseBdev1 00:11:33.659 20:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.659 20:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:33.659 20:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:33.659 20:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:33.659 20:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:33.659 20:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:33.659 20:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:33.659 20:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:33.659 20:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.659 20:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.659 20:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.659 20:08:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:33.659 20:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.659 20:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.659 [ 00:11:33.660 { 00:11:33.660 "name": "BaseBdev1", 00:11:33.660 "aliases": [ 00:11:33.660 "12c52b5c-049d-4fc0-a304-32b1e746167d" 00:11:33.660 ], 00:11:33.660 "product_name": "Malloc disk", 00:11:33.660 "block_size": 512, 00:11:33.660 "num_blocks": 65536, 00:11:33.660 "uuid": "12c52b5c-049d-4fc0-a304-32b1e746167d", 00:11:33.660 "assigned_rate_limits": { 00:11:33.660 "rw_ios_per_sec": 0, 00:11:33.660 "rw_mbytes_per_sec": 0, 00:11:33.660 "r_mbytes_per_sec": 0, 00:11:33.660 "w_mbytes_per_sec": 0 00:11:33.660 }, 00:11:33.660 "claimed": true, 00:11:33.660 "claim_type": "exclusive_write", 00:11:33.660 "zoned": false, 00:11:33.660 "supported_io_types": { 00:11:33.660 "read": true, 00:11:33.660 "write": true, 00:11:33.660 "unmap": true, 00:11:33.660 "flush": true, 00:11:33.660 "reset": true, 00:11:33.660 "nvme_admin": false, 00:11:33.660 "nvme_io": false, 00:11:33.660 "nvme_io_md": false, 00:11:33.660 "write_zeroes": true, 00:11:33.660 "zcopy": true, 00:11:33.660 "get_zone_info": false, 00:11:33.660 "zone_management": false, 00:11:33.660 "zone_append": false, 00:11:33.660 "compare": false, 00:11:33.660 "compare_and_write": false, 00:11:33.660 "abort": true, 00:11:33.660 "seek_hole": false, 00:11:33.660 "seek_data": false, 00:11:33.660 "copy": true, 00:11:33.660 "nvme_iov_md": false 00:11:33.660 }, 00:11:33.660 "memory_domains": [ 00:11:33.660 { 00:11:33.660 "dma_device_id": "system", 00:11:33.660 "dma_device_type": 1 00:11:33.660 }, 00:11:33.660 { 00:11:33.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.660 "dma_device_type": 2 00:11:33.660 } 00:11:33.660 ], 00:11:33.660 "driver_specific": {} 00:11:33.660 } 00:11:33.660 ] 00:11:33.660 20:08:19 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.660 20:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:33.660 20:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:33.660 20:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:33.660 20:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:33.660 20:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:33.660 20:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:33.660 20:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:33.660 20:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.660 20:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.660 20:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.660 20:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.660 20:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.660 20:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.660 20:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.660 20:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.660 20:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.660 20:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:11:33.660 "name": "Existed_Raid", 00:11:33.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.660 "strip_size_kb": 0, 00:11:33.660 "state": "configuring", 00:11:33.660 "raid_level": "raid1", 00:11:33.660 "superblock": false, 00:11:33.660 "num_base_bdevs": 3, 00:11:33.660 "num_base_bdevs_discovered": 1, 00:11:33.660 "num_base_bdevs_operational": 3, 00:11:33.660 "base_bdevs_list": [ 00:11:33.660 { 00:11:33.660 "name": "BaseBdev1", 00:11:33.660 "uuid": "12c52b5c-049d-4fc0-a304-32b1e746167d", 00:11:33.660 "is_configured": true, 00:11:33.660 "data_offset": 0, 00:11:33.660 "data_size": 65536 00:11:33.660 }, 00:11:33.660 { 00:11:33.660 "name": "BaseBdev2", 00:11:33.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.660 "is_configured": false, 00:11:33.660 "data_offset": 0, 00:11:33.660 "data_size": 0 00:11:33.660 }, 00:11:33.660 { 00:11:33.660 "name": "BaseBdev3", 00:11:33.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.660 "is_configured": false, 00:11:33.660 "data_offset": 0, 00:11:33.660 "data_size": 0 00:11:33.660 } 00:11:33.660 ] 00:11:33.660 }' 00:11:33.660 20:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.660 20:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.227 20:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:34.227 20:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.227 20:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.227 [2024-10-17 20:08:19.579273] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:34.227 [2024-10-17 20:08:19.579339] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:34.227 20:08:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.227 20:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:34.227 20:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.227 20:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.227 [2024-10-17 20:08:19.587305] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:34.227 [2024-10-17 20:08:19.589740] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:34.227 [2024-10-17 20:08:19.589795] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:34.227 [2024-10-17 20:08:19.589828] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:34.227 [2024-10-17 20:08:19.589844] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:34.227 20:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.227 20:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:34.227 20:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:34.227 20:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:34.227 20:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:34.227 20:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:34.227 20:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:34.227 20:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:11:34.227 20:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:34.227 20:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.227 20:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.227 20:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.227 20:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.227 20:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.227 20:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.227 20:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.227 20:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.227 20:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.227 20:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.227 "name": "Existed_Raid", 00:11:34.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.227 "strip_size_kb": 0, 00:11:34.227 "state": "configuring", 00:11:34.227 "raid_level": "raid1", 00:11:34.227 "superblock": false, 00:11:34.227 "num_base_bdevs": 3, 00:11:34.227 "num_base_bdevs_discovered": 1, 00:11:34.227 "num_base_bdevs_operational": 3, 00:11:34.227 "base_bdevs_list": [ 00:11:34.227 { 00:11:34.227 "name": "BaseBdev1", 00:11:34.227 "uuid": "12c52b5c-049d-4fc0-a304-32b1e746167d", 00:11:34.227 "is_configured": true, 00:11:34.227 "data_offset": 0, 00:11:34.227 "data_size": 65536 00:11:34.227 }, 00:11:34.227 { 00:11:34.227 "name": "BaseBdev2", 00:11:34.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.227 
"is_configured": false, 00:11:34.227 "data_offset": 0, 00:11:34.227 "data_size": 0 00:11:34.227 }, 00:11:34.227 { 00:11:34.227 "name": "BaseBdev3", 00:11:34.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.227 "is_configured": false, 00:11:34.227 "data_offset": 0, 00:11:34.227 "data_size": 0 00:11:34.227 } 00:11:34.227 ] 00:11:34.227 }' 00:11:34.227 20:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.227 20:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.486 20:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:34.486 20:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.486 20:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.486 [2024-10-17 20:08:20.130366] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:34.486 BaseBdev2 00:11:34.486 20:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.486 20:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:34.486 20:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:34.486 20:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:34.486 20:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:34.486 20:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:34.486 20:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:34.486 20:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:34.486 20:08:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.486 20:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.744 20:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.744 20:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:34.744 20:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.744 20:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.744 [ 00:11:34.744 { 00:11:34.744 "name": "BaseBdev2", 00:11:34.744 "aliases": [ 00:11:34.744 "85bea412-1f60-4878-8710-6bd5ab1504f7" 00:11:34.744 ], 00:11:34.744 "product_name": "Malloc disk", 00:11:34.744 "block_size": 512, 00:11:34.744 "num_blocks": 65536, 00:11:34.744 "uuid": "85bea412-1f60-4878-8710-6bd5ab1504f7", 00:11:34.744 "assigned_rate_limits": { 00:11:34.744 "rw_ios_per_sec": 0, 00:11:34.744 "rw_mbytes_per_sec": 0, 00:11:34.744 "r_mbytes_per_sec": 0, 00:11:34.744 "w_mbytes_per_sec": 0 00:11:34.744 }, 00:11:34.744 "claimed": true, 00:11:34.744 "claim_type": "exclusive_write", 00:11:34.744 "zoned": false, 00:11:34.744 "supported_io_types": { 00:11:34.744 "read": true, 00:11:34.744 "write": true, 00:11:34.744 "unmap": true, 00:11:34.744 "flush": true, 00:11:34.744 "reset": true, 00:11:34.744 "nvme_admin": false, 00:11:34.744 "nvme_io": false, 00:11:34.744 "nvme_io_md": false, 00:11:34.744 "write_zeroes": true, 00:11:34.744 "zcopy": true, 00:11:34.744 "get_zone_info": false, 00:11:34.744 "zone_management": false, 00:11:34.744 "zone_append": false, 00:11:34.744 "compare": false, 00:11:34.744 "compare_and_write": false, 00:11:34.744 "abort": true, 00:11:34.744 "seek_hole": false, 00:11:34.744 "seek_data": false, 00:11:34.744 "copy": true, 00:11:34.744 "nvme_iov_md": false 00:11:34.744 }, 00:11:34.744 
"memory_domains": [ 00:11:34.744 { 00:11:34.744 "dma_device_id": "system", 00:11:34.744 "dma_device_type": 1 00:11:34.744 }, 00:11:34.744 { 00:11:34.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.744 "dma_device_type": 2 00:11:34.744 } 00:11:34.744 ], 00:11:34.744 "driver_specific": {} 00:11:34.744 } 00:11:34.744 ] 00:11:34.744 20:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.744 20:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:34.744 20:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:34.744 20:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:34.744 20:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:34.744 20:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:34.745 20:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:34.745 20:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:34.745 20:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:34.745 20:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:34.745 20:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.745 20:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.745 20:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.745 20:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.745 20:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] 
| select(.name == "Existed_Raid")' 00:11:34.745 20:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.745 20:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.745 20:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.745 20:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.745 20:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.745 "name": "Existed_Raid", 00:11:34.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.745 "strip_size_kb": 0, 00:11:34.745 "state": "configuring", 00:11:34.745 "raid_level": "raid1", 00:11:34.745 "superblock": false, 00:11:34.745 "num_base_bdevs": 3, 00:11:34.745 "num_base_bdevs_discovered": 2, 00:11:34.745 "num_base_bdevs_operational": 3, 00:11:34.745 "base_bdevs_list": [ 00:11:34.745 { 00:11:34.745 "name": "BaseBdev1", 00:11:34.745 "uuid": "12c52b5c-049d-4fc0-a304-32b1e746167d", 00:11:34.745 "is_configured": true, 00:11:34.745 "data_offset": 0, 00:11:34.745 "data_size": 65536 00:11:34.745 }, 00:11:34.745 { 00:11:34.745 "name": "BaseBdev2", 00:11:34.745 "uuid": "85bea412-1f60-4878-8710-6bd5ab1504f7", 00:11:34.745 "is_configured": true, 00:11:34.745 "data_offset": 0, 00:11:34.745 "data_size": 65536 00:11:34.745 }, 00:11:34.745 { 00:11:34.745 "name": "BaseBdev3", 00:11:34.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.745 "is_configured": false, 00:11:34.745 "data_offset": 0, 00:11:34.745 "data_size": 0 00:11:34.745 } 00:11:34.745 ] 00:11:34.745 }' 00:11:34.745 20:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.745 20:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.312 20:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:11:35.312 20:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.312 20:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.312 [2024-10-17 20:08:20.755481] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:35.312 [2024-10-17 20:08:20.755849] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:35.312 [2024-10-17 20:08:20.755880] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:35.312 [2024-10-17 20:08:20.756306] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:35.312 [2024-10-17 20:08:20.756570] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:35.312 [2024-10-17 20:08:20.756585] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:35.312 [2024-10-17 20:08:20.756897] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:35.312 BaseBdev3 00:11:35.312 20:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.312 20:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:35.312 20:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:35.312 20:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:35.312 20:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:35.312 20:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:35.312 20:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:35.312 20:08:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:35.312 20:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.312 20:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.312 20:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.312 20:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:35.312 20:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.312 20:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.312 [ 00:11:35.312 { 00:11:35.312 "name": "BaseBdev3", 00:11:35.312 "aliases": [ 00:11:35.312 "cfe3ad2a-122d-4281-8ee9-cb5c50f9a9c8" 00:11:35.312 ], 00:11:35.312 "product_name": "Malloc disk", 00:11:35.312 "block_size": 512, 00:11:35.312 "num_blocks": 65536, 00:11:35.312 "uuid": "cfe3ad2a-122d-4281-8ee9-cb5c50f9a9c8", 00:11:35.312 "assigned_rate_limits": { 00:11:35.312 "rw_ios_per_sec": 0, 00:11:35.312 "rw_mbytes_per_sec": 0, 00:11:35.312 "r_mbytes_per_sec": 0, 00:11:35.312 "w_mbytes_per_sec": 0 00:11:35.312 }, 00:11:35.312 "claimed": true, 00:11:35.312 "claim_type": "exclusive_write", 00:11:35.312 "zoned": false, 00:11:35.312 "supported_io_types": { 00:11:35.312 "read": true, 00:11:35.312 "write": true, 00:11:35.312 "unmap": true, 00:11:35.312 "flush": true, 00:11:35.312 "reset": true, 00:11:35.312 "nvme_admin": false, 00:11:35.312 "nvme_io": false, 00:11:35.312 "nvme_io_md": false, 00:11:35.312 "write_zeroes": true, 00:11:35.312 "zcopy": true, 00:11:35.312 "get_zone_info": false, 00:11:35.312 "zone_management": false, 00:11:35.312 "zone_append": false, 00:11:35.312 "compare": false, 00:11:35.312 "compare_and_write": false, 00:11:35.312 "abort": true, 00:11:35.312 "seek_hole": false, 00:11:35.312 "seek_data": false, 00:11:35.313 
"copy": true, 00:11:35.313 "nvme_iov_md": false 00:11:35.313 }, 00:11:35.313 "memory_domains": [ 00:11:35.313 { 00:11:35.313 "dma_device_id": "system", 00:11:35.313 "dma_device_type": 1 00:11:35.313 }, 00:11:35.313 { 00:11:35.313 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.313 "dma_device_type": 2 00:11:35.313 } 00:11:35.313 ], 00:11:35.313 "driver_specific": {} 00:11:35.313 } 00:11:35.313 ] 00:11:35.313 20:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.313 20:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:35.313 20:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:35.313 20:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:35.313 20:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:35.313 20:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:35.313 20:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:35.313 20:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:35.313 20:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:35.313 20:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:35.313 20:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.313 20:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.313 20:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.313 20:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.313 20:08:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.313 20:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.313 20:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.313 20:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.313 20:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.313 20:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.313 "name": "Existed_Raid", 00:11:35.313 "uuid": "16c202c7-957f-4c34-9ff5-977a1b6c8d4c", 00:11:35.313 "strip_size_kb": 0, 00:11:35.313 "state": "online", 00:11:35.313 "raid_level": "raid1", 00:11:35.313 "superblock": false, 00:11:35.313 "num_base_bdevs": 3, 00:11:35.313 "num_base_bdevs_discovered": 3, 00:11:35.313 "num_base_bdevs_operational": 3, 00:11:35.313 "base_bdevs_list": [ 00:11:35.313 { 00:11:35.313 "name": "BaseBdev1", 00:11:35.313 "uuid": "12c52b5c-049d-4fc0-a304-32b1e746167d", 00:11:35.313 "is_configured": true, 00:11:35.313 "data_offset": 0, 00:11:35.313 "data_size": 65536 00:11:35.313 }, 00:11:35.313 { 00:11:35.313 "name": "BaseBdev2", 00:11:35.313 "uuid": "85bea412-1f60-4878-8710-6bd5ab1504f7", 00:11:35.313 "is_configured": true, 00:11:35.313 "data_offset": 0, 00:11:35.313 "data_size": 65536 00:11:35.313 }, 00:11:35.313 { 00:11:35.313 "name": "BaseBdev3", 00:11:35.313 "uuid": "cfe3ad2a-122d-4281-8ee9-cb5c50f9a9c8", 00:11:35.313 "is_configured": true, 00:11:35.313 "data_offset": 0, 00:11:35.313 "data_size": 65536 00:11:35.313 } 00:11:35.313 ] 00:11:35.313 }' 00:11:35.313 20:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.313 20:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.878 20:08:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:35.878 20:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:35.878 20:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:35.878 20:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:35.878 20:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:35.878 20:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:35.878 20:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:35.878 20:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:35.878 20:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.878 20:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.878 [2024-10-17 20:08:21.288118] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:35.878 20:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.878 20:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:35.878 "name": "Existed_Raid", 00:11:35.878 "aliases": [ 00:11:35.878 "16c202c7-957f-4c34-9ff5-977a1b6c8d4c" 00:11:35.878 ], 00:11:35.879 "product_name": "Raid Volume", 00:11:35.879 "block_size": 512, 00:11:35.879 "num_blocks": 65536, 00:11:35.879 "uuid": "16c202c7-957f-4c34-9ff5-977a1b6c8d4c", 00:11:35.879 "assigned_rate_limits": { 00:11:35.879 "rw_ios_per_sec": 0, 00:11:35.879 "rw_mbytes_per_sec": 0, 00:11:35.879 "r_mbytes_per_sec": 0, 00:11:35.879 "w_mbytes_per_sec": 0 00:11:35.879 }, 00:11:35.879 "claimed": false, 00:11:35.879 "zoned": false, 
00:11:35.879 "supported_io_types": { 00:11:35.879 "read": true, 00:11:35.879 "write": true, 00:11:35.879 "unmap": false, 00:11:35.879 "flush": false, 00:11:35.879 "reset": true, 00:11:35.879 "nvme_admin": false, 00:11:35.879 "nvme_io": false, 00:11:35.879 "nvme_io_md": false, 00:11:35.879 "write_zeroes": true, 00:11:35.879 "zcopy": false, 00:11:35.879 "get_zone_info": false, 00:11:35.879 "zone_management": false, 00:11:35.879 "zone_append": false, 00:11:35.879 "compare": false, 00:11:35.879 "compare_and_write": false, 00:11:35.879 "abort": false, 00:11:35.879 "seek_hole": false, 00:11:35.879 "seek_data": false, 00:11:35.879 "copy": false, 00:11:35.879 "nvme_iov_md": false 00:11:35.879 }, 00:11:35.879 "memory_domains": [ 00:11:35.879 { 00:11:35.879 "dma_device_id": "system", 00:11:35.879 "dma_device_type": 1 00:11:35.879 }, 00:11:35.879 { 00:11:35.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.879 "dma_device_type": 2 00:11:35.879 }, 00:11:35.879 { 00:11:35.879 "dma_device_id": "system", 00:11:35.879 "dma_device_type": 1 00:11:35.879 }, 00:11:35.879 { 00:11:35.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.879 "dma_device_type": 2 00:11:35.879 }, 00:11:35.879 { 00:11:35.879 "dma_device_id": "system", 00:11:35.879 "dma_device_type": 1 00:11:35.879 }, 00:11:35.879 { 00:11:35.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.879 "dma_device_type": 2 00:11:35.879 } 00:11:35.879 ], 00:11:35.879 "driver_specific": { 00:11:35.879 "raid": { 00:11:35.879 "uuid": "16c202c7-957f-4c34-9ff5-977a1b6c8d4c", 00:11:35.879 "strip_size_kb": 0, 00:11:35.879 "state": "online", 00:11:35.879 "raid_level": "raid1", 00:11:35.879 "superblock": false, 00:11:35.879 "num_base_bdevs": 3, 00:11:35.879 "num_base_bdevs_discovered": 3, 00:11:35.879 "num_base_bdevs_operational": 3, 00:11:35.879 "base_bdevs_list": [ 00:11:35.879 { 00:11:35.879 "name": "BaseBdev1", 00:11:35.879 "uuid": "12c52b5c-049d-4fc0-a304-32b1e746167d", 00:11:35.879 "is_configured": true, 00:11:35.879 
"data_offset": 0, 00:11:35.879 "data_size": 65536 00:11:35.879 }, 00:11:35.879 { 00:11:35.879 "name": "BaseBdev2", 00:11:35.879 "uuid": "85bea412-1f60-4878-8710-6bd5ab1504f7", 00:11:35.879 "is_configured": true, 00:11:35.879 "data_offset": 0, 00:11:35.879 "data_size": 65536 00:11:35.879 }, 00:11:35.879 { 00:11:35.879 "name": "BaseBdev3", 00:11:35.879 "uuid": "cfe3ad2a-122d-4281-8ee9-cb5c50f9a9c8", 00:11:35.879 "is_configured": true, 00:11:35.879 "data_offset": 0, 00:11:35.879 "data_size": 65536 00:11:35.879 } 00:11:35.879 ] 00:11:35.879 } 00:11:35.879 } 00:11:35.879 }' 00:11:35.879 20:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:35.879 20:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:35.879 BaseBdev2 00:11:35.879 BaseBdev3' 00:11:35.879 20:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.879 20:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:35.879 20:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:35.879 20:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:35.879 20:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.879 20:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.879 20:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.879 20:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.879 20:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:11:35.879 20:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:35.879 20:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:35.879 20:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:35.879 20:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.879 20:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.879 20:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.879 20:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.137 20:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.137 20:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.137 20:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.137 20:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:36.137 20:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.137 20:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.137 20:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.137 20:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.137 20:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.137 20:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:11:36.137 20:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:36.137 20:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.137 20:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.137 [2024-10-17 20:08:21.599787] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:36.137 20:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.137 20:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:36.137 20:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:36.137 20:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:36.138 20:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:36.138 20:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:36.138 20:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:11:36.138 20:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:36.138 20:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:36.138 20:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:36.138 20:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:36.138 20:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:36.138 20:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.138 20:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:11:36.138 20:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.138 20:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.138 20:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.138 20:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.138 20:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.138 20:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.138 20:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.138 20:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.138 "name": "Existed_Raid", 00:11:36.138 "uuid": "16c202c7-957f-4c34-9ff5-977a1b6c8d4c", 00:11:36.138 "strip_size_kb": 0, 00:11:36.138 "state": "online", 00:11:36.138 "raid_level": "raid1", 00:11:36.138 "superblock": false, 00:11:36.138 "num_base_bdevs": 3, 00:11:36.138 "num_base_bdevs_discovered": 2, 00:11:36.138 "num_base_bdevs_operational": 2, 00:11:36.138 "base_bdevs_list": [ 00:11:36.138 { 00:11:36.138 "name": null, 00:11:36.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.138 "is_configured": false, 00:11:36.138 "data_offset": 0, 00:11:36.138 "data_size": 65536 00:11:36.138 }, 00:11:36.138 { 00:11:36.138 "name": "BaseBdev2", 00:11:36.138 "uuid": "85bea412-1f60-4878-8710-6bd5ab1504f7", 00:11:36.138 "is_configured": true, 00:11:36.138 "data_offset": 0, 00:11:36.138 "data_size": 65536 00:11:36.138 }, 00:11:36.138 { 00:11:36.138 "name": "BaseBdev3", 00:11:36.138 "uuid": "cfe3ad2a-122d-4281-8ee9-cb5c50f9a9c8", 00:11:36.138 "is_configured": true, 00:11:36.138 "data_offset": 0, 00:11:36.138 "data_size": 65536 00:11:36.138 } 00:11:36.138 ] 
00:11:36.138 }' 00:11:36.138 20:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.138 20:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.704 20:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:36.704 20:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:36.704 20:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.704 20:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.704 20:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.704 20:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:36.704 20:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.704 20:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:36.704 20:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:36.704 20:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:36.704 20:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.704 20:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.704 [2024-10-17 20:08:22.239116] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:36.704 20:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.704 20:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:36.704 20:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:36.704 20:08:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.704 20:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:36.704 20:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.704 20:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.704 20:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.963 20:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:36.963 20:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:36.963 20:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:36.963 20:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.963 20:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.963 [2024-10-17 20:08:22.375707] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:36.963 [2024-10-17 20:08:22.375825] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:36.963 [2024-10-17 20:08:22.459835] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:36.963 [2024-10-17 20:08:22.460190] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:36.963 [2024-10-17 20:08:22.460227] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:36.963 20:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.963 20:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:36.963 20:08:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:36.963 20:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.963 20:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:36.963 20:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.963 20:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.963 20:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.963 20:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:36.963 20:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:36.963 20:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:11:36.963 20:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:36.963 20:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:36.963 20:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:36.963 20:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.963 20:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.963 BaseBdev2 00:11:36.963 20:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.963 20:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:36.963 20:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:36.963 20:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:36.963 
20:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:11:36.963 20:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:11:36.963 20:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:11:36.963 20:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:11:36.963 20:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:36.963 20:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:36.963 20:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:36.963 20:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:11:36.963 20:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:36.963 20:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:36.963 [
00:11:36.963 {
00:11:36.963 "name": "BaseBdev2",
00:11:36.963 "aliases": [
00:11:36.963 "902297a6-604e-4fc6-b769-9ed260e3c679"
00:11:36.963 ],
00:11:36.963 "product_name": "Malloc disk",
00:11:36.963 "block_size": 512,
00:11:36.963 "num_blocks": 65536,
00:11:36.963 "uuid": "902297a6-604e-4fc6-b769-9ed260e3c679",
00:11:36.963 "assigned_rate_limits": {
00:11:36.963 "rw_ios_per_sec": 0,
00:11:36.963 "rw_mbytes_per_sec": 0,
00:11:36.963 "r_mbytes_per_sec": 0,
00:11:36.963 "w_mbytes_per_sec": 0
00:11:36.963 },
00:11:36.963 "claimed": false,
00:11:36.963 "zoned": false,
00:11:36.963 "supported_io_types": {
00:11:36.963 "read": true,
00:11:36.963 "write": true,
00:11:36.963 "unmap": true,
00:11:36.963 "flush": true,
00:11:36.963 "reset": true,
00:11:36.963 "nvme_admin": false,
00:11:36.963 "nvme_io": false,
00:11:36.963 "nvme_io_md": false,
00:11:36.963 "write_zeroes": true,
00:11:36.963 "zcopy": true,
00:11:36.963 "get_zone_info": false,
00:11:36.963 "zone_management": false,
00:11:36.963 "zone_append": false,
00:11:36.963 "compare": false,
00:11:36.963 "compare_and_write": false,
00:11:36.963 "abort": true,
00:11:36.963 "seek_hole": false,
00:11:36.963 "seek_data": false,
00:11:36.963 "copy": true,
00:11:36.963 "nvme_iov_md": false
00:11:36.963 },
00:11:36.963 "memory_domains": [
00:11:36.963 {
00:11:36.963 "dma_device_id": "system",
00:11:36.963 "dma_device_type": 1
00:11:36.963 },
00:11:36.963 {
00:11:36.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:36.963 "dma_device_type": 2
00:11:36.963 }
00:11:36.963 ],
00:11:36.963 "driver_specific": {}
00:11:36.963 }
00:11:36.963 ]
00:11:36.963 20:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:36.963 20:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:11:36.963 20:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:11:36.963 20:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:11:36.963 20:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:11:36.963 20:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:36.963 20:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:37.223 BaseBdev3
00:11:37.223 20:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:37.223 20:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:11:37.223 20:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:11:37.223 20:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:11:37.223 20:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:11:37.223 20:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:11:37.223 20:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:11:37.223 20:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:11:37.223 20:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:37.223 20:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:37.223 20:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:37.223 20:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:11:37.223 20:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:37.223 20:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:37.223 [
00:11:37.223 {
00:11:37.223 "name": "BaseBdev3",
00:11:37.223 "aliases": [
00:11:37.223 "8ec8e719-032f-46ea-9e12-fb50c1eca9a3"
00:11:37.223 ],
00:11:37.223 "product_name": "Malloc disk",
00:11:37.223 "block_size": 512,
00:11:37.223 "num_blocks": 65536,
00:11:37.223 "uuid": "8ec8e719-032f-46ea-9e12-fb50c1eca9a3",
00:11:37.223 "assigned_rate_limits": {
00:11:37.223 "rw_ios_per_sec": 0,
00:11:37.223 "rw_mbytes_per_sec": 0,
00:11:37.223 "r_mbytes_per_sec": 0,
00:11:37.223 "w_mbytes_per_sec": 0
00:11:37.223 },
00:11:37.223 "claimed": false,
00:11:37.223 "zoned": false,
00:11:37.223 "supported_io_types": {
00:11:37.223 "read": true,
00:11:37.223 "write": true,
00:11:37.223 "unmap": true,
00:11:37.223 "flush": true,
00:11:37.223 "reset": true,
00:11:37.223 "nvme_admin": false,
00:11:37.223 "nvme_io": false,
00:11:37.223 "nvme_io_md": false,
00:11:37.223 "write_zeroes": true,
00:11:37.223 "zcopy": true,
00:11:37.223 "get_zone_info": false,
00:11:37.223 "zone_management": false,
00:11:37.223 "zone_append": false,
00:11:37.223 "compare": false,
00:11:37.223 "compare_and_write": false,
00:11:37.223 "abort": true,
00:11:37.223 "seek_hole": false,
00:11:37.223 "seek_data": false,
00:11:37.223 "copy": true,
00:11:37.223 "nvme_iov_md": false
00:11:37.223 },
00:11:37.223 "memory_domains": [
00:11:37.223 {
00:11:37.223 "dma_device_id": "system",
00:11:37.223 "dma_device_type": 1
00:11:37.223 },
00:11:37.223 {
00:11:37.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:37.223 "dma_device_type": 2
00:11:37.223 }
00:11:37.223 ],
00:11:37.223 "driver_specific": {}
00:11:37.223 }
00:11:37.223 ]
00:11:37.223 20:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:37.223 20:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:11:37.223 20:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:11:37.223 20:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:11:37.223 20:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:11:37.223 20:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:37.223 20:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:37.223 [2024-10-17 20:08:22.667142] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:11:37.223 [2024-10-17 20:08:22.667354] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:11:37.223 [2024-10-17 20:08:22.667408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:11:37.223 [2024-10-17 20:08:22.669838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:11:37.223 20:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:37.223 20:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:11:37.223 20:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:37.223 20:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:37.223 20:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:37.223 20:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:37.223 20:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:11:37.223 20:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:37.223 20:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:37.223 20:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:37.223 20:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:37.223 20:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:37.223 20:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:37.223 20:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:37.223 20:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:37.223 20:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:37.223 20:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:37.223 "name": "Existed_Raid",
00:11:37.223 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:37.223 "strip_size_kb": 0,
00:11:37.223 "state": "configuring",
00:11:37.223 "raid_level": "raid1",
00:11:37.223 "superblock": false,
00:11:37.223 "num_base_bdevs": 3,
00:11:37.223 "num_base_bdevs_discovered": 2,
00:11:37.223 "num_base_bdevs_operational": 3,
00:11:37.223 "base_bdevs_list": [
00:11:37.223 {
00:11:37.223 "name": "BaseBdev1",
00:11:37.223 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:37.223 "is_configured": false,
00:11:37.223 "data_offset": 0,
00:11:37.223 "data_size": 0
00:11:37.223 },
00:11:37.223 {
00:11:37.223 "name": "BaseBdev2",
00:11:37.223 "uuid": "902297a6-604e-4fc6-b769-9ed260e3c679",
00:11:37.223 "is_configured": true,
00:11:37.223 "data_offset": 0,
00:11:37.223 "data_size": 65536
00:11:37.223 },
00:11:37.223 {
00:11:37.223 "name": "BaseBdev3",
00:11:37.223 "uuid": "8ec8e719-032f-46ea-9e12-fb50c1eca9a3",
00:11:37.223 "is_configured": true,
00:11:37.223 "data_offset": 0,
00:11:37.223 "data_size": 65536
00:11:37.223 }
00:11:37.223 ]
00:11:37.223 }'
00:11:37.223 20:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:37.223 20:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:37.790 20:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:11:37.790 20:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:37.790 20:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:37.790 [2024-10-17 20:08:23.195315] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:11:37.790 20:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:37.790 20:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:11:37.790 20:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:37.790 20:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:37.790 20:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:37.790 20:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:37.790 20:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:11:37.790 20:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:37.790 20:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:37.790 20:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:37.790 20:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:37.790 20:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:37.790 20:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:37.790 20:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:37.790 20:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:37.790 20:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:37.790 20:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:37.790 "name": "Existed_Raid",
00:11:37.790 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:37.790 "strip_size_kb": 0,
00:11:37.790 "state": "configuring",
00:11:37.790 "raid_level": "raid1",
00:11:37.790 "superblock": false,
00:11:37.790 "num_base_bdevs": 3,
00:11:37.790 "num_base_bdevs_discovered": 1,
00:11:37.790 "num_base_bdevs_operational": 3,
00:11:37.790 "base_bdevs_list": [
00:11:37.790 {
00:11:37.790 "name": "BaseBdev1",
00:11:37.790 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:37.790 "is_configured": false,
00:11:37.790 "data_offset": 0,
00:11:37.790 "data_size": 0
00:11:37.790 },
00:11:37.790 {
00:11:37.790 "name": null,
00:11:37.790 "uuid": "902297a6-604e-4fc6-b769-9ed260e3c679",
00:11:37.790 "is_configured": false,
00:11:37.790 "data_offset": 0,
00:11:37.790 "data_size": 65536
00:11:37.790 },
00:11:37.790 {
00:11:37.790 "name": "BaseBdev3",
00:11:37.790 "uuid": "8ec8e719-032f-46ea-9e12-fb50c1eca9a3",
00:11:37.790 "is_configured": true,
00:11:37.790 "data_offset": 0,
00:11:37.790 "data_size": 65536
00:11:37.790 }
00:11:37.790 ]
00:11:37.790 }'
00:11:37.790 20:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:37.790 20:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:38.357 20:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:38.357 20:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:38.357 20:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:38.357 20:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:11:38.357 20:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:38.357 20:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:11:38.357 20:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:11:38.357 20:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:38.357 20:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:38.357 [2024-10-17 20:08:23.798209] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:11:38.357 BaseBdev1
00:11:38.357 20:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:38.357 20:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:11:38.357 20:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:11:38.357 20:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:11:38.357 20:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:11:38.357 20:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:11:38.357 20:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:11:38.357 20:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:11:38.357 20:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:38.357 20:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:38.357 20:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:38.357 20:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:11:38.357 20:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:38.357 20:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:38.357 [
00:11:38.357 {
00:11:38.357 "name": "BaseBdev1",
00:11:38.357 "aliases": [
00:11:38.357 "9f71eedb-5a4e-4576-a802-51f0c1d46e64"
00:11:38.357 ],
00:11:38.357 "product_name": "Malloc disk",
00:11:38.357 "block_size": 512,
00:11:38.357 "num_blocks": 65536,
00:11:38.357 "uuid": "9f71eedb-5a4e-4576-a802-51f0c1d46e64",
00:11:38.357 "assigned_rate_limits": {
00:11:38.357 "rw_ios_per_sec": 0,
00:11:38.357 "rw_mbytes_per_sec": 0,
00:11:38.357 "r_mbytes_per_sec": 0,
00:11:38.357 "w_mbytes_per_sec": 0
00:11:38.357 },
00:11:38.357 "claimed": true,
00:11:38.357 "claim_type": "exclusive_write",
00:11:38.357 "zoned": false,
00:11:38.357 "supported_io_types": {
00:11:38.357 "read": true,
00:11:38.357 "write": true,
00:11:38.357 "unmap": true,
00:11:38.357 "flush": true,
00:11:38.357 "reset": true,
00:11:38.357 "nvme_admin": false,
00:11:38.357 "nvme_io": false,
00:11:38.357 "nvme_io_md": false,
00:11:38.357 "write_zeroes": true,
00:11:38.357 "zcopy": true,
00:11:38.357 "get_zone_info": false,
00:11:38.357 "zone_management": false,
00:11:38.357 "zone_append": false,
00:11:38.357 "compare": false,
00:11:38.357 "compare_and_write": false,
00:11:38.357 "abort": true,
00:11:38.357 "seek_hole": false,
00:11:38.357 "seek_data": false,
00:11:38.357 "copy": true,
00:11:38.357 "nvme_iov_md": false
00:11:38.357 },
00:11:38.357 "memory_domains": [
00:11:38.357 {
00:11:38.357 "dma_device_id": "system",
00:11:38.357 "dma_device_type": 1
00:11:38.357 },
00:11:38.357 {
00:11:38.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:38.357 "dma_device_type": 2
00:11:38.357 }
00:11:38.357 ],
00:11:38.357 "driver_specific": {}
00:11:38.357 }
00:11:38.357 ]
00:11:38.357 20:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:38.357 20:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:11:38.357 20:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:11:38.357 20:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:38.357 20:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:38.358 20:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:38.358 20:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:38.358 20:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:11:38.358 20:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:38.358 20:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:38.358 20:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:38.358 20:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:38.358 20:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:38.358 20:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:38.358 20:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:38.358 20:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:38.358 20:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:38.358 20:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:38.358 "name": "Existed_Raid",
00:11:38.358 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:38.358 "strip_size_kb": 0,
00:11:38.358 "state": "configuring",
00:11:38.358 "raid_level": "raid1",
00:11:38.358 "superblock": false,
00:11:38.358 "num_base_bdevs": 3,
00:11:38.358 "num_base_bdevs_discovered": 2,
00:11:38.358 "num_base_bdevs_operational": 3,
00:11:38.358 "base_bdevs_list": [
00:11:38.358 {
00:11:38.358 "name": "BaseBdev1",
00:11:38.358 "uuid": "9f71eedb-5a4e-4576-a802-51f0c1d46e64",
00:11:38.358 "is_configured": true,
00:11:38.358 "data_offset": 0,
00:11:38.358 "data_size": 65536
00:11:38.358 },
00:11:38.358 {
00:11:38.358 "name": null,
00:11:38.358 "uuid": "902297a6-604e-4fc6-b769-9ed260e3c679",
00:11:38.358 "is_configured": false,
00:11:38.358 "data_offset": 0,
00:11:38.358 "data_size": 65536
00:11:38.358 },
00:11:38.358 {
00:11:38.358 "name": "BaseBdev3",
00:11:38.358 "uuid": "8ec8e719-032f-46ea-9e12-fb50c1eca9a3",
00:11:38.358 "is_configured": true,
00:11:38.358 "data_offset": 0,
00:11:38.358 "data_size": 65536
00:11:38.358 }
00:11:38.358 ]
00:11:38.358 }'
00:11:38.358 20:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:38.358 20:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:38.924 20:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:38.924 20:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:38.924 20:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:38.924 20:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:11:38.924 20:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:38.924 20:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:11:38.924 20:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:11:38.924 20:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:38.924 20:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:38.924 [2024-10-17 20:08:24.418389] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:11:38.924 20:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:38.924 20:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:11:38.924 20:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:38.924 20:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:38.924 20:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:38.924 20:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:38.924 20:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:11:38.924 20:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:38.924 20:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:38.924 20:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:38.924 20:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:38.924 20:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:38.924 20:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:38.924 20:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:38.924 20:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:38.924 20:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:38.924 20:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:38.924 "name": "Existed_Raid",
00:11:38.924 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:38.924 "strip_size_kb": 0,
00:11:38.924 "state": "configuring",
00:11:38.924 "raid_level": "raid1",
00:11:38.924 "superblock": false,
00:11:38.924 "num_base_bdevs": 3,
00:11:38.924 "num_base_bdevs_discovered": 1,
00:11:38.924 "num_base_bdevs_operational": 3,
00:11:38.924 "base_bdevs_list": [
00:11:38.924 {
00:11:38.924 "name": "BaseBdev1",
00:11:38.924 "uuid": "9f71eedb-5a4e-4576-a802-51f0c1d46e64",
00:11:38.924 "is_configured": true,
00:11:38.924 "data_offset": 0,
00:11:38.924 "data_size": 65536
00:11:38.924 },
00:11:38.924 {
00:11:38.924 "name": null,
00:11:38.924 "uuid": "902297a6-604e-4fc6-b769-9ed260e3c679",
00:11:38.924 "is_configured": false,
00:11:38.924 "data_offset": 0,
00:11:38.924 "data_size": 65536
00:11:38.924 },
00:11:38.924 {
00:11:38.924 "name": null,
00:11:38.924 "uuid": "8ec8e719-032f-46ea-9e12-fb50c1eca9a3",
00:11:38.924 "is_configured": false,
00:11:38.924 "data_offset": 0,
00:11:38.924 "data_size": 65536
00:11:38.924 }
00:11:38.924 ]
00:11:38.924 }'
00:11:38.924 20:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:38.924 20:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:39.491 20:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:39.491 20:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:39.491 20:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:39.491 20:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:11:39.491 20:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:39.491 20:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:11:39.491 20:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:11:39.491 20:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:39.491 20:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:39.491 [2024-10-17 20:08:25.006706] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:11:39.491 20:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:39.491 20:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:11:39.491 20:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:39.491 20:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:39.491 20:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:39.491 20:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:39.491 20:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:11:39.491 20:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:39.491 20:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:39.491 20:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:39.491 20:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:39.491 20:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:39.491 20:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:39.491 20:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:39.491 20:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:39.491 20:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:39.491 20:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:39.491 "name": "Existed_Raid",
00:11:39.491 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:39.491 "strip_size_kb": 0,
00:11:39.491 "state": "configuring",
00:11:39.491 "raid_level": "raid1",
00:11:39.491 "superblock": false,
00:11:39.491 "num_base_bdevs": 3,
00:11:39.491 "num_base_bdevs_discovered": 2,
00:11:39.491 "num_base_bdevs_operational": 3,
00:11:39.491 "base_bdevs_list": [
00:11:39.491 {
00:11:39.491 "name": "BaseBdev1",
00:11:39.491 "uuid": "9f71eedb-5a4e-4576-a802-51f0c1d46e64",
00:11:39.491 "is_configured": true,
00:11:39.491 "data_offset": 0,
00:11:39.491 "data_size": 65536
00:11:39.491 },
00:11:39.491 {
00:11:39.491 "name": null,
00:11:39.491 "uuid": "902297a6-604e-4fc6-b769-9ed260e3c679",
00:11:39.491 "is_configured": false,
00:11:39.491 "data_offset": 0,
00:11:39.491 "data_size": 65536
00:11:39.491 },
00:11:39.491 {
00:11:39.491 "name": "BaseBdev3",
00:11:39.491 "uuid": "8ec8e719-032f-46ea-9e12-fb50c1eca9a3",
00:11:39.491 "is_configured": true,
00:11:39.491 "data_offset": 0,
00:11:39.491 "data_size": 65536
00:11:39.491 }
00:11:39.491 ]
00:11:39.491 }'
00:11:39.491 20:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:39.491 20:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:40.135 20:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:40.135 20:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:11:40.135 20:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:40.135 20:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:40.135 20:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:40.135 20:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:11:40.135 20:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:11:40.135 20:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:40.135 20:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:40.135 [2024-10-17 20:08:25.618905] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:11:40.135 20:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:40.135 20:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:11:40.135 20:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:40.135 20:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:40.135 20:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:40.135 20:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:40.135 20:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:11:40.135 20:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:40.135 20:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:40.135 20:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:40.135 20:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:40.135 20:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:40.135 20:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:40.135 20:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:40.135 20:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:40.135 20:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:40.135 20:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:40.135 "name": "Existed_Raid",
00:11:40.135 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:40.135 "strip_size_kb": 0,
00:11:40.135 "state": "configuring",
00:11:40.135 "raid_level": "raid1",
00:11:40.135 "superblock": false,
00:11:40.135 "num_base_bdevs": 3,
00:11:40.135 "num_base_bdevs_discovered": 1,
00:11:40.135 "num_base_bdevs_operational": 3,
00:11:40.135 "base_bdevs_list": [
00:11:40.135 {
00:11:40.135 "name": null,
00:11:40.135 "uuid": "9f71eedb-5a4e-4576-a802-51f0c1d46e64",
00:11:40.135 "is_configured": false,
00:11:40.135 "data_offset": 0,
00:11:40.135 "data_size": 65536
00:11:40.135 },
00:11:40.135 {
00:11:40.135 "name": null,
00:11:40.135 "uuid": "902297a6-604e-4fc6-b769-9ed260e3c679",
00:11:40.135 "is_configured": false,
00:11:40.135 "data_offset": 0,
00:11:40.135 "data_size": 65536
00:11:40.135 },
00:11:40.135 {
00:11:40.135 "name": "BaseBdev3",
00:11:40.135 "uuid": "8ec8e719-032f-46ea-9e12-fb50c1eca9a3",
00:11:40.135 "is_configured": true,
00:11:40.135 "data_offset": 0,
00:11:40.135 "data_size": 65536
00:11:40.135 }
00:11:40.135 ]
00:11:40.135 }'
00:11:40.135 20:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:40.135 20:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:40.702 20:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:40.702 20:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:40.702 20:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:40.702 20:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:11:40.702 20:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:40.702 20:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:11:40.702 20:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:11:40.702 20:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:40.702 20:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:40.702 [2024-10-17 20:08:26.314351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:11:40.702 20:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:40.702 20:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:11:40.702 20:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:40.702 20:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:40.702 20:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:40.702 20:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:40.702 20:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:11:40.702 20:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:40.702 20:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:40.702 20:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:40.702 20:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:40.702 20:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:40.702 20:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:40.702 20:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:40.702 20:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:40.702 20:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:40.961 20:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:40.961 "name": "Existed_Raid",
00:11:40.961 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:40.961 "strip_size_kb": 0,
00:11:40.961 "state": "configuring",
00:11:40.961 "raid_level": "raid1",
00:11:40.961 "superblock": false,
00:11:40.961 "num_base_bdevs": 3,
00:11:40.961 "num_base_bdevs_discovered": 2,
00:11:40.961 "num_base_bdevs_operational": 3,
00:11:40.961 "base_bdevs_list": [
00:11:40.961 {
00:11:40.961 "name": null,
00:11:40.961 "uuid": "9f71eedb-5a4e-4576-a802-51f0c1d46e64",
00:11:40.961 "is_configured": false,
00:11:40.961 "data_offset": 0,
00:11:40.961 "data_size": 65536
00:11:40.961 },
00:11:40.961 {
00:11:40.961 "name": "BaseBdev2",
00:11:40.961 "uuid": "902297a6-604e-4fc6-b769-9ed260e3c679",
00:11:40.961 "is_configured": true,
00:11:40.961 "data_offset": 0,
00:11:40.961 "data_size": 65536
00:11:40.961 },
00:11:40.961 {
00:11:40.961 "name": "BaseBdev3", 00:11:40.961 "uuid": "8ec8e719-032f-46ea-9e12-fb50c1eca9a3", 00:11:40.961 "is_configured": true, 00:11:40.961 "data_offset": 0, 00:11:40.961 "data_size": 65536 00:11:40.961 } 00:11:40.961 ] 00:11:40.961 }' 00:11:40.961 20:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.961 20:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.220 20:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.220 20:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:41.220 20:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.220 20:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.220 20:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.220 20:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:41.479 20:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.479 20:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.479 20:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.479 20:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:41.479 20:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.479 20:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9f71eedb-5a4e-4576-a802-51f0c1d46e64 00:11:41.479 20:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.479 20:08:26 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.479 [2024-10-17 20:08:26.965959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:41.479 [2024-10-17 20:08:26.966063] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:41.479 [2024-10-17 20:08:26.966077] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:41.479 [2024-10-17 20:08:26.966422] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:41.479 [2024-10-17 20:08:26.966680] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:41.479 [2024-10-17 20:08:26.966709] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:41.479 [2024-10-17 20:08:26.967035] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:41.479 NewBaseBdev 00:11:41.479 20:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.479 20:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:41.479 20:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:11:41.479 20:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:41.479 20:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:41.479 20:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:41.479 20:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:41.479 20:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:41.479 20:08:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.479 20:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.479 20:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.479 20:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:41.479 20:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.479 20:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.479 [ 00:11:41.479 { 00:11:41.479 "name": "NewBaseBdev", 00:11:41.479 "aliases": [ 00:11:41.479 "9f71eedb-5a4e-4576-a802-51f0c1d46e64" 00:11:41.479 ], 00:11:41.479 "product_name": "Malloc disk", 00:11:41.479 "block_size": 512, 00:11:41.479 "num_blocks": 65536, 00:11:41.479 "uuid": "9f71eedb-5a4e-4576-a802-51f0c1d46e64", 00:11:41.479 "assigned_rate_limits": { 00:11:41.479 "rw_ios_per_sec": 0, 00:11:41.479 "rw_mbytes_per_sec": 0, 00:11:41.479 "r_mbytes_per_sec": 0, 00:11:41.479 "w_mbytes_per_sec": 0 00:11:41.479 }, 00:11:41.479 "claimed": true, 00:11:41.479 "claim_type": "exclusive_write", 00:11:41.479 "zoned": false, 00:11:41.479 "supported_io_types": { 00:11:41.479 "read": true, 00:11:41.479 "write": true, 00:11:41.479 "unmap": true, 00:11:41.479 "flush": true, 00:11:41.479 "reset": true, 00:11:41.479 "nvme_admin": false, 00:11:41.479 "nvme_io": false, 00:11:41.479 "nvme_io_md": false, 00:11:41.479 "write_zeroes": true, 00:11:41.479 "zcopy": true, 00:11:41.479 "get_zone_info": false, 00:11:41.479 "zone_management": false, 00:11:41.479 "zone_append": false, 00:11:41.479 "compare": false, 00:11:41.479 "compare_and_write": false, 00:11:41.479 "abort": true, 00:11:41.479 "seek_hole": false, 00:11:41.479 "seek_data": false, 00:11:41.479 "copy": true, 00:11:41.479 "nvme_iov_md": false 00:11:41.479 }, 00:11:41.479 "memory_domains": [ 00:11:41.479 { 00:11:41.479 
"dma_device_id": "system", 00:11:41.479 "dma_device_type": 1 00:11:41.479 }, 00:11:41.479 { 00:11:41.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.479 "dma_device_type": 2 00:11:41.479 } 00:11:41.479 ], 00:11:41.479 "driver_specific": {} 00:11:41.479 } 00:11:41.479 ] 00:11:41.479 20:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.480 20:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:41.480 20:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:41.480 20:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:41.480 20:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:41.480 20:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:41.480 20:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:41.480 20:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:41.480 20:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.480 20:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.480 20:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.480 20:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.480 20:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.480 20:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.480 20:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:41.480 20:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.480 20:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.480 20:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.480 "name": "Existed_Raid", 00:11:41.480 "uuid": "406a6ac7-f2e2-4bbb-bbda-97f498b0ce68", 00:11:41.480 "strip_size_kb": 0, 00:11:41.480 "state": "online", 00:11:41.480 "raid_level": "raid1", 00:11:41.480 "superblock": false, 00:11:41.480 "num_base_bdevs": 3, 00:11:41.480 "num_base_bdevs_discovered": 3, 00:11:41.480 "num_base_bdevs_operational": 3, 00:11:41.480 "base_bdevs_list": [ 00:11:41.480 { 00:11:41.480 "name": "NewBaseBdev", 00:11:41.480 "uuid": "9f71eedb-5a4e-4576-a802-51f0c1d46e64", 00:11:41.480 "is_configured": true, 00:11:41.480 "data_offset": 0, 00:11:41.480 "data_size": 65536 00:11:41.480 }, 00:11:41.480 { 00:11:41.480 "name": "BaseBdev2", 00:11:41.480 "uuid": "902297a6-604e-4fc6-b769-9ed260e3c679", 00:11:41.480 "is_configured": true, 00:11:41.480 "data_offset": 0, 00:11:41.480 "data_size": 65536 00:11:41.480 }, 00:11:41.480 { 00:11:41.480 "name": "BaseBdev3", 00:11:41.480 "uuid": "8ec8e719-032f-46ea-9e12-fb50c1eca9a3", 00:11:41.480 "is_configured": true, 00:11:41.480 "data_offset": 0, 00:11:41.480 "data_size": 65536 00:11:41.480 } 00:11:41.480 ] 00:11:41.480 }' 00:11:41.480 20:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.480 20:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.049 20:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:42.049 20:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:42.049 20:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:42.049 20:08:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:42.049 20:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:42.049 20:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:42.049 20:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:42.049 20:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:42.049 20:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.049 20:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.049 [2024-10-17 20:08:27.550749] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:42.049 20:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.049 20:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:42.049 "name": "Existed_Raid", 00:11:42.049 "aliases": [ 00:11:42.049 "406a6ac7-f2e2-4bbb-bbda-97f498b0ce68" 00:11:42.049 ], 00:11:42.049 "product_name": "Raid Volume", 00:11:42.049 "block_size": 512, 00:11:42.049 "num_blocks": 65536, 00:11:42.049 "uuid": "406a6ac7-f2e2-4bbb-bbda-97f498b0ce68", 00:11:42.049 "assigned_rate_limits": { 00:11:42.049 "rw_ios_per_sec": 0, 00:11:42.049 "rw_mbytes_per_sec": 0, 00:11:42.049 "r_mbytes_per_sec": 0, 00:11:42.049 "w_mbytes_per_sec": 0 00:11:42.049 }, 00:11:42.049 "claimed": false, 00:11:42.049 "zoned": false, 00:11:42.049 "supported_io_types": { 00:11:42.049 "read": true, 00:11:42.049 "write": true, 00:11:42.049 "unmap": false, 00:11:42.049 "flush": false, 00:11:42.049 "reset": true, 00:11:42.049 "nvme_admin": false, 00:11:42.049 "nvme_io": false, 00:11:42.049 "nvme_io_md": false, 00:11:42.049 "write_zeroes": true, 00:11:42.049 "zcopy": false, 00:11:42.049 
"get_zone_info": false, 00:11:42.049 "zone_management": false, 00:11:42.049 "zone_append": false, 00:11:42.049 "compare": false, 00:11:42.049 "compare_and_write": false, 00:11:42.049 "abort": false, 00:11:42.049 "seek_hole": false, 00:11:42.049 "seek_data": false, 00:11:42.049 "copy": false, 00:11:42.049 "nvme_iov_md": false 00:11:42.049 }, 00:11:42.049 "memory_domains": [ 00:11:42.049 { 00:11:42.049 "dma_device_id": "system", 00:11:42.049 "dma_device_type": 1 00:11:42.049 }, 00:11:42.049 { 00:11:42.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.049 "dma_device_type": 2 00:11:42.049 }, 00:11:42.049 { 00:11:42.049 "dma_device_id": "system", 00:11:42.049 "dma_device_type": 1 00:11:42.049 }, 00:11:42.049 { 00:11:42.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.049 "dma_device_type": 2 00:11:42.049 }, 00:11:42.049 { 00:11:42.049 "dma_device_id": "system", 00:11:42.049 "dma_device_type": 1 00:11:42.049 }, 00:11:42.049 { 00:11:42.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.049 "dma_device_type": 2 00:11:42.049 } 00:11:42.049 ], 00:11:42.049 "driver_specific": { 00:11:42.049 "raid": { 00:11:42.049 "uuid": "406a6ac7-f2e2-4bbb-bbda-97f498b0ce68", 00:11:42.049 "strip_size_kb": 0, 00:11:42.049 "state": "online", 00:11:42.049 "raid_level": "raid1", 00:11:42.049 "superblock": false, 00:11:42.049 "num_base_bdevs": 3, 00:11:42.049 "num_base_bdevs_discovered": 3, 00:11:42.049 "num_base_bdevs_operational": 3, 00:11:42.049 "base_bdevs_list": [ 00:11:42.049 { 00:11:42.049 "name": "NewBaseBdev", 00:11:42.049 "uuid": "9f71eedb-5a4e-4576-a802-51f0c1d46e64", 00:11:42.049 "is_configured": true, 00:11:42.049 "data_offset": 0, 00:11:42.049 "data_size": 65536 00:11:42.049 }, 00:11:42.049 { 00:11:42.049 "name": "BaseBdev2", 00:11:42.049 "uuid": "902297a6-604e-4fc6-b769-9ed260e3c679", 00:11:42.049 "is_configured": true, 00:11:42.049 "data_offset": 0, 00:11:42.049 "data_size": 65536 00:11:42.049 }, 00:11:42.049 { 00:11:42.049 "name": "BaseBdev3", 00:11:42.049 "uuid": 
"8ec8e719-032f-46ea-9e12-fb50c1eca9a3", 00:11:42.049 "is_configured": true, 00:11:42.049 "data_offset": 0, 00:11:42.049 "data_size": 65536 00:11:42.049 } 00:11:42.049 ] 00:11:42.049 } 00:11:42.049 } 00:11:42.049 }' 00:11:42.049 20:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:42.049 20:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:42.049 BaseBdev2 00:11:42.049 BaseBdev3' 00:11:42.049 20:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:42.049 20:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:42.049 20:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:42.308 20:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:42.308 20:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.308 20:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:42.308 20:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.308 20:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.308 20:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:42.308 20:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:42.308 20:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:42.308 20:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:11:42.308 20:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:42.308 20:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.308 20:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.308 20:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.308 20:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:42.308 20:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:42.308 20:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:42.308 20:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:42.308 20:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.308 20:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.308 20:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:42.308 20:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.308 20:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:42.308 20:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:42.308 20:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:42.308 20:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.308 20:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.308 
[2024-10-17 20:08:27.890512] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:42.308 [2024-10-17 20:08:27.890552] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:42.308 [2024-10-17 20:08:27.890651] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:42.308 [2024-10-17 20:08:27.890995] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:42.308 [2024-10-17 20:08:27.891045] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:42.308 20:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.308 20:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67346 00:11:42.308 20:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 67346 ']' 00:11:42.308 20:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 67346 00:11:42.308 20:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:11:42.308 20:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:42.308 20:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67346 00:11:42.308 killing process with pid 67346 00:11:42.308 20:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:42.308 20:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:42.308 20:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67346' 00:11:42.308 20:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 67346 00:11:42.308 [2024-10-17 
20:08:27.932737] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:42.308 20:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 67346 00:11:42.566 [2024-10-17 20:08:28.203105] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:43.941 ************************************ 00:11:43.941 END TEST raid_state_function_test 00:11:43.941 ************************************ 00:11:43.941 20:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:43.941 00:11:43.941 real 0m11.882s 00:11:43.941 user 0m19.748s 00:11:43.941 sys 0m1.571s 00:11:43.941 20:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:43.941 20:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.941 20:08:29 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:11:43.941 20:08:29 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:43.941 20:08:29 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:43.941 20:08:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:43.941 ************************************ 00:11:43.941 START TEST raid_state_function_test_sb 00:11:43.941 ************************************ 00:11:43.941 20:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 true 00:11:43.941 20:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:43.941 20:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:11:43.941 20:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:43.941 20:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:43.941 20:08:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:43.941 20:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:43.941 20:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:43.941 20:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:43.941 20:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:43.941 20:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:43.941 20:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:43.941 20:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:43.941 20:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:43.941 20:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:43.941 20:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:43.941 20:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:43.941 20:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:43.941 20:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:43.941 20:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:43.941 20:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:43.941 20:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:43.941 20:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:43.941 
20:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:43.941 20:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:43.941 Process raid pid: 67978 00:11:43.942 20:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:43.942 20:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=67978 00:11:43.942 20:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:43.942 20:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67978' 00:11:43.942 20:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 67978 00:11:43.942 20:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 67978 ']' 00:11:43.942 20:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.942 20:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:43.942 20:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:43.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:43.942 20:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:43.942 20:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.942 [2024-10-17 20:08:29.464732] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 
00:11:43.942 [2024-10-17 20:08:29.465214] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:44.200 [2024-10-17 20:08:29.647263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.200 [2024-10-17 20:08:29.814215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.458 [2024-10-17 20:08:30.038334] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:44.458 [2024-10-17 20:08:30.038407] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:45.025 20:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:45.025 20:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:11:45.025 20:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:45.026 20:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.026 20:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.026 [2024-10-17 20:08:30.414611] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:45.026 [2024-10-17 20:08:30.414675] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:45.026 [2024-10-17 20:08:30.414693] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:45.026 [2024-10-17 20:08:30.414709] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:45.026 [2024-10-17 20:08:30.414720] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:45.026 [2024-10-17 20:08:30.414734] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:45.026 20:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.026 20:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:45.026 20:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:45.026 20:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:45.026 20:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:45.026 20:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:45.026 20:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:45.026 20:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.026 20:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.026 20:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.026 20:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.026 20:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.026 20:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.026 20:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:45.026 20:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.026 20:08:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.026 20:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.026 "name": "Existed_Raid", 00:11:45.026 "uuid": "760afd72-10bd-4ccb-8b46-248ec9d7cb1c", 00:11:45.026 "strip_size_kb": 0, 00:11:45.026 "state": "configuring", 00:11:45.026 "raid_level": "raid1", 00:11:45.026 "superblock": true, 00:11:45.026 "num_base_bdevs": 3, 00:11:45.026 "num_base_bdevs_discovered": 0, 00:11:45.026 "num_base_bdevs_operational": 3, 00:11:45.026 "base_bdevs_list": [ 00:11:45.026 { 00:11:45.026 "name": "BaseBdev1", 00:11:45.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.026 "is_configured": false, 00:11:45.026 "data_offset": 0, 00:11:45.026 "data_size": 0 00:11:45.026 }, 00:11:45.026 { 00:11:45.026 "name": "BaseBdev2", 00:11:45.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.026 "is_configured": false, 00:11:45.026 "data_offset": 0, 00:11:45.026 "data_size": 0 00:11:45.026 }, 00:11:45.026 { 00:11:45.026 "name": "BaseBdev3", 00:11:45.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.026 "is_configured": false, 00:11:45.026 "data_offset": 0, 00:11:45.026 "data_size": 0 00:11:45.026 } 00:11:45.026 ] 00:11:45.026 }' 00:11:45.026 20:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.026 20:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.284 20:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:45.284 20:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.284 20:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.284 [2024-10-17 20:08:30.934675] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:45.284 [2024-10-17 20:08:30.934721] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:45.542 20:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.542 20:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:45.542 20:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.542 20:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.542 [2024-10-17 20:08:30.942657] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:45.542 [2024-10-17 20:08:30.942740] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:45.542 [2024-10-17 20:08:30.942755] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:45.542 [2024-10-17 20:08:30.942772] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:45.542 [2024-10-17 20:08:30.942781] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:45.542 [2024-10-17 20:08:30.942796] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:45.542 20:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.542 20:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:45.542 20:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.542 20:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.542 [2024-10-17 20:08:30.987710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:45.542 BaseBdev1 
00:11:45.542 20:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.542 20:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:45.542 20:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:45.542 20:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:45.542 20:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:45.542 20:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:45.542 20:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:45.542 20:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:45.542 20:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.542 20:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.542 20:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.542 20:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:45.542 20:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.542 20:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.542 [ 00:11:45.542 { 00:11:45.542 "name": "BaseBdev1", 00:11:45.542 "aliases": [ 00:11:45.542 "64158338-4212-44c3-89be-3a3d16605a05" 00:11:45.542 ], 00:11:45.542 "product_name": "Malloc disk", 00:11:45.542 "block_size": 512, 00:11:45.542 "num_blocks": 65536, 00:11:45.542 "uuid": "64158338-4212-44c3-89be-3a3d16605a05", 00:11:45.542 "assigned_rate_limits": { 00:11:45.542 
"rw_ios_per_sec": 0, 00:11:45.542 "rw_mbytes_per_sec": 0, 00:11:45.542 "r_mbytes_per_sec": 0, 00:11:45.542 "w_mbytes_per_sec": 0 00:11:45.542 }, 00:11:45.542 "claimed": true, 00:11:45.542 "claim_type": "exclusive_write", 00:11:45.542 "zoned": false, 00:11:45.542 "supported_io_types": { 00:11:45.542 "read": true, 00:11:45.542 "write": true, 00:11:45.542 "unmap": true, 00:11:45.542 "flush": true, 00:11:45.542 "reset": true, 00:11:45.542 "nvme_admin": false, 00:11:45.542 "nvme_io": false, 00:11:45.542 "nvme_io_md": false, 00:11:45.542 "write_zeroes": true, 00:11:45.542 "zcopy": true, 00:11:45.542 "get_zone_info": false, 00:11:45.542 "zone_management": false, 00:11:45.542 "zone_append": false, 00:11:45.542 "compare": false, 00:11:45.542 "compare_and_write": false, 00:11:45.542 "abort": true, 00:11:45.542 "seek_hole": false, 00:11:45.542 "seek_data": false, 00:11:45.542 "copy": true, 00:11:45.542 "nvme_iov_md": false 00:11:45.542 }, 00:11:45.542 "memory_domains": [ 00:11:45.542 { 00:11:45.542 "dma_device_id": "system", 00:11:45.542 "dma_device_type": 1 00:11:45.542 }, 00:11:45.542 { 00:11:45.542 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.542 "dma_device_type": 2 00:11:45.542 } 00:11:45.542 ], 00:11:45.543 "driver_specific": {} 00:11:45.543 } 00:11:45.543 ] 00:11:45.543 20:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.543 20:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:45.543 20:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:45.543 20:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:45.543 20:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:45.543 20:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:11:45.543 20:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:45.543 20:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:45.543 20:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.543 20:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.543 20:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.543 20:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.543 20:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.543 20:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.543 20:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.543 20:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:45.543 20:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.543 20:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.543 "name": "Existed_Raid", 00:11:45.543 "uuid": "4818c9b1-fcd3-4954-9676-2b27ae3c2e24", 00:11:45.543 "strip_size_kb": 0, 00:11:45.543 "state": "configuring", 00:11:45.543 "raid_level": "raid1", 00:11:45.543 "superblock": true, 00:11:45.543 "num_base_bdevs": 3, 00:11:45.543 "num_base_bdevs_discovered": 1, 00:11:45.543 "num_base_bdevs_operational": 3, 00:11:45.543 "base_bdevs_list": [ 00:11:45.543 { 00:11:45.543 "name": "BaseBdev1", 00:11:45.543 "uuid": "64158338-4212-44c3-89be-3a3d16605a05", 00:11:45.543 "is_configured": true, 00:11:45.543 "data_offset": 2048, 00:11:45.543 "data_size": 63488 
00:11:45.543 }, 00:11:45.543 { 00:11:45.543 "name": "BaseBdev2", 00:11:45.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.543 "is_configured": false, 00:11:45.543 "data_offset": 0, 00:11:45.543 "data_size": 0 00:11:45.543 }, 00:11:45.543 { 00:11:45.543 "name": "BaseBdev3", 00:11:45.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.543 "is_configured": false, 00:11:45.543 "data_offset": 0, 00:11:45.543 "data_size": 0 00:11:45.543 } 00:11:45.543 ] 00:11:45.543 }' 00:11:45.543 20:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.543 20:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.116 20:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:46.116 20:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.116 20:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.116 [2024-10-17 20:08:31.559925] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:46.116 [2024-10-17 20:08:31.559990] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:46.116 20:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.116 20:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:46.116 20:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.116 20:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.116 [2024-10-17 20:08:31.567964] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:46.116 [2024-10-17 20:08:31.570652] 
bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:46.116 [2024-10-17 20:08:31.570711] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:46.116 [2024-10-17 20:08:31.570743] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:46.116 [2024-10-17 20:08:31.570758] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:46.116 20:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.116 20:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:46.116 20:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:46.116 20:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:46.116 20:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:46.116 20:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:46.116 20:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:46.116 20:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:46.116 20:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:46.116 20:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.116 20:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.116 20:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.116 20:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:11:46.116 20:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.116 20:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:46.116 20:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.116 20:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.116 20:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.116 20:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.116 "name": "Existed_Raid", 00:11:46.116 "uuid": "8975e78a-31df-4fa5-ab26-774e67cea006", 00:11:46.116 "strip_size_kb": 0, 00:11:46.116 "state": "configuring", 00:11:46.116 "raid_level": "raid1", 00:11:46.116 "superblock": true, 00:11:46.116 "num_base_bdevs": 3, 00:11:46.116 "num_base_bdevs_discovered": 1, 00:11:46.116 "num_base_bdevs_operational": 3, 00:11:46.116 "base_bdevs_list": [ 00:11:46.116 { 00:11:46.116 "name": "BaseBdev1", 00:11:46.116 "uuid": "64158338-4212-44c3-89be-3a3d16605a05", 00:11:46.116 "is_configured": true, 00:11:46.116 "data_offset": 2048, 00:11:46.116 "data_size": 63488 00:11:46.116 }, 00:11:46.116 { 00:11:46.116 "name": "BaseBdev2", 00:11:46.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.116 "is_configured": false, 00:11:46.116 "data_offset": 0, 00:11:46.116 "data_size": 0 00:11:46.116 }, 00:11:46.116 { 00:11:46.116 "name": "BaseBdev3", 00:11:46.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.116 "is_configured": false, 00:11:46.116 "data_offset": 0, 00:11:46.116 "data_size": 0 00:11:46.116 } 00:11:46.116 ] 00:11:46.116 }' 00:11:46.116 20:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.116 20:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:11:46.684 20:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:46.684 20:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.684 20:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.684 [2024-10-17 20:08:32.167111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:46.684 BaseBdev2 00:11:46.684 20:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.684 20:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:46.684 20:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:46.684 20:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:46.684 20:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:46.684 20:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:46.684 20:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:46.684 20:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:46.684 20:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.684 20:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.684 20:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.684 20:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:46.684 20:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:46.684 20:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.684 [ 00:11:46.684 { 00:11:46.684 "name": "BaseBdev2", 00:11:46.684 "aliases": [ 00:11:46.684 "ab56b5c9-f553-4ef3-9377-b6ca271110c1" 00:11:46.684 ], 00:11:46.684 "product_name": "Malloc disk", 00:11:46.684 "block_size": 512, 00:11:46.684 "num_blocks": 65536, 00:11:46.684 "uuid": "ab56b5c9-f553-4ef3-9377-b6ca271110c1", 00:11:46.684 "assigned_rate_limits": { 00:11:46.684 "rw_ios_per_sec": 0, 00:11:46.684 "rw_mbytes_per_sec": 0, 00:11:46.684 "r_mbytes_per_sec": 0, 00:11:46.684 "w_mbytes_per_sec": 0 00:11:46.684 }, 00:11:46.684 "claimed": true, 00:11:46.684 "claim_type": "exclusive_write", 00:11:46.684 "zoned": false, 00:11:46.684 "supported_io_types": { 00:11:46.684 "read": true, 00:11:46.684 "write": true, 00:11:46.684 "unmap": true, 00:11:46.684 "flush": true, 00:11:46.684 "reset": true, 00:11:46.684 "nvme_admin": false, 00:11:46.684 "nvme_io": false, 00:11:46.684 "nvme_io_md": false, 00:11:46.684 "write_zeroes": true, 00:11:46.684 "zcopy": true, 00:11:46.684 "get_zone_info": false, 00:11:46.684 "zone_management": false, 00:11:46.684 "zone_append": false, 00:11:46.684 "compare": false, 00:11:46.684 "compare_and_write": false, 00:11:46.684 "abort": true, 00:11:46.684 "seek_hole": false, 00:11:46.684 "seek_data": false, 00:11:46.684 "copy": true, 00:11:46.684 "nvme_iov_md": false 00:11:46.684 }, 00:11:46.684 "memory_domains": [ 00:11:46.684 { 00:11:46.684 "dma_device_id": "system", 00:11:46.684 "dma_device_type": 1 00:11:46.684 }, 00:11:46.684 { 00:11:46.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.684 "dma_device_type": 2 00:11:46.684 } 00:11:46.684 ], 00:11:46.684 "driver_specific": {} 00:11:46.684 } 00:11:46.684 ] 00:11:46.684 20:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.684 20:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 
00:11:46.684 20:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:46.684 20:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:46.684 20:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:46.684 20:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:46.684 20:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:46.684 20:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:46.684 20:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:46.684 20:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:46.684 20:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.684 20:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.684 20:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.684 20:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.684 20:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.684 20:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:46.684 20:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.684 20:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.684 20:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.684 
20:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.684 "name": "Existed_Raid", 00:11:46.684 "uuid": "8975e78a-31df-4fa5-ab26-774e67cea006", 00:11:46.684 "strip_size_kb": 0, 00:11:46.684 "state": "configuring", 00:11:46.684 "raid_level": "raid1", 00:11:46.684 "superblock": true, 00:11:46.684 "num_base_bdevs": 3, 00:11:46.684 "num_base_bdevs_discovered": 2, 00:11:46.684 "num_base_bdevs_operational": 3, 00:11:46.684 "base_bdevs_list": [ 00:11:46.684 { 00:11:46.684 "name": "BaseBdev1", 00:11:46.684 "uuid": "64158338-4212-44c3-89be-3a3d16605a05", 00:11:46.684 "is_configured": true, 00:11:46.684 "data_offset": 2048, 00:11:46.684 "data_size": 63488 00:11:46.684 }, 00:11:46.684 { 00:11:46.684 "name": "BaseBdev2", 00:11:46.684 "uuid": "ab56b5c9-f553-4ef3-9377-b6ca271110c1", 00:11:46.684 "is_configured": true, 00:11:46.684 "data_offset": 2048, 00:11:46.684 "data_size": 63488 00:11:46.684 }, 00:11:46.684 { 00:11:46.684 "name": "BaseBdev3", 00:11:46.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.684 "is_configured": false, 00:11:46.684 "data_offset": 0, 00:11:46.684 "data_size": 0 00:11:46.684 } 00:11:46.684 ] 00:11:46.684 }' 00:11:46.684 20:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.684 20:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.251 20:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:47.251 20:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.251 20:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.251 [2024-10-17 20:08:32.785690] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:47.251 [2024-10-17 20:08:32.786074] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:11:47.251 [2024-10-17 20:08:32.786106] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:47.251 BaseBdev3 00:11:47.251 [2024-10-17 20:08:32.786450] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:47.251 [2024-10-17 20:08:32.786666] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:47.251 [2024-10-17 20:08:32.786682] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:47.251 [2024-10-17 20:08:32.786866] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:47.251 20:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.251 20:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:47.251 20:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:47.251 20:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:47.251 20:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:47.251 20:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:47.251 20:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:47.251 20:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:47.251 20:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.251 20:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.251 20:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.251 20:08:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:47.251 20:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.251 20:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.251 [ 00:11:47.251 { 00:11:47.251 "name": "BaseBdev3", 00:11:47.251 "aliases": [ 00:11:47.251 "91f720e9-9ddb-4b6c-a0ae-256e06d22d5f" 00:11:47.251 ], 00:11:47.251 "product_name": "Malloc disk", 00:11:47.251 "block_size": 512, 00:11:47.251 "num_blocks": 65536, 00:11:47.251 "uuid": "91f720e9-9ddb-4b6c-a0ae-256e06d22d5f", 00:11:47.251 "assigned_rate_limits": { 00:11:47.251 "rw_ios_per_sec": 0, 00:11:47.251 "rw_mbytes_per_sec": 0, 00:11:47.251 "r_mbytes_per_sec": 0, 00:11:47.251 "w_mbytes_per_sec": 0 00:11:47.251 }, 00:11:47.251 "claimed": true, 00:11:47.251 "claim_type": "exclusive_write", 00:11:47.251 "zoned": false, 00:11:47.251 "supported_io_types": { 00:11:47.251 "read": true, 00:11:47.251 "write": true, 00:11:47.251 "unmap": true, 00:11:47.251 "flush": true, 00:11:47.251 "reset": true, 00:11:47.251 "nvme_admin": false, 00:11:47.251 "nvme_io": false, 00:11:47.251 "nvme_io_md": false, 00:11:47.251 "write_zeroes": true, 00:11:47.251 "zcopy": true, 00:11:47.251 "get_zone_info": false, 00:11:47.251 "zone_management": false, 00:11:47.251 "zone_append": false, 00:11:47.251 "compare": false, 00:11:47.251 "compare_and_write": false, 00:11:47.251 "abort": true, 00:11:47.251 "seek_hole": false, 00:11:47.251 "seek_data": false, 00:11:47.251 "copy": true, 00:11:47.251 "nvme_iov_md": false 00:11:47.251 }, 00:11:47.251 "memory_domains": [ 00:11:47.251 { 00:11:47.251 "dma_device_id": "system", 00:11:47.251 "dma_device_type": 1 00:11:47.251 }, 00:11:47.251 { 00:11:47.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.251 "dma_device_type": 2 00:11:47.251 } 00:11:47.251 ], 00:11:47.251 "driver_specific": {} 00:11:47.251 } 00:11:47.251 ] 
00:11:47.251 20:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.251 20:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:47.251 20:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:47.251 20:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:47.251 20:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:47.251 20:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:47.251 20:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:47.251 20:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:47.251 20:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:47.251 20:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:47.251 20:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.251 20:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.251 20:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.251 20:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.251 20:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.251 20:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:47.251 20:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.251 
20:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.251 20:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.251 20:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.251 "name": "Existed_Raid", 00:11:47.251 "uuid": "8975e78a-31df-4fa5-ab26-774e67cea006", 00:11:47.251 "strip_size_kb": 0, 00:11:47.251 "state": "online", 00:11:47.251 "raid_level": "raid1", 00:11:47.251 "superblock": true, 00:11:47.251 "num_base_bdevs": 3, 00:11:47.251 "num_base_bdevs_discovered": 3, 00:11:47.251 "num_base_bdevs_operational": 3, 00:11:47.251 "base_bdevs_list": [ 00:11:47.251 { 00:11:47.251 "name": "BaseBdev1", 00:11:47.251 "uuid": "64158338-4212-44c3-89be-3a3d16605a05", 00:11:47.251 "is_configured": true, 00:11:47.251 "data_offset": 2048, 00:11:47.251 "data_size": 63488 00:11:47.251 }, 00:11:47.251 { 00:11:47.251 "name": "BaseBdev2", 00:11:47.251 "uuid": "ab56b5c9-f553-4ef3-9377-b6ca271110c1", 00:11:47.251 "is_configured": true, 00:11:47.251 "data_offset": 2048, 00:11:47.251 "data_size": 63488 00:11:47.251 }, 00:11:47.251 { 00:11:47.251 "name": "BaseBdev3", 00:11:47.251 "uuid": "91f720e9-9ddb-4b6c-a0ae-256e06d22d5f", 00:11:47.251 "is_configured": true, 00:11:47.251 "data_offset": 2048, 00:11:47.251 "data_size": 63488 00:11:47.251 } 00:11:47.251 ] 00:11:47.251 }' 00:11:47.251 20:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.251 20:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.853 20:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:47.853 20:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:47.853 20:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:11:47.853 20:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:47.853 20:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:47.853 20:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:47.853 20:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:47.853 20:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:47.853 20:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.853 20:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.853 [2024-10-17 20:08:33.314341] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:47.853 20:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.853 20:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:47.853 "name": "Existed_Raid", 00:11:47.853 "aliases": [ 00:11:47.853 "8975e78a-31df-4fa5-ab26-774e67cea006" 00:11:47.853 ], 00:11:47.853 "product_name": "Raid Volume", 00:11:47.853 "block_size": 512, 00:11:47.853 "num_blocks": 63488, 00:11:47.853 "uuid": "8975e78a-31df-4fa5-ab26-774e67cea006", 00:11:47.853 "assigned_rate_limits": { 00:11:47.853 "rw_ios_per_sec": 0, 00:11:47.853 "rw_mbytes_per_sec": 0, 00:11:47.853 "r_mbytes_per_sec": 0, 00:11:47.853 "w_mbytes_per_sec": 0 00:11:47.853 }, 00:11:47.853 "claimed": false, 00:11:47.853 "zoned": false, 00:11:47.853 "supported_io_types": { 00:11:47.853 "read": true, 00:11:47.853 "write": true, 00:11:47.853 "unmap": false, 00:11:47.853 "flush": false, 00:11:47.853 "reset": true, 00:11:47.853 "nvme_admin": false, 00:11:47.853 "nvme_io": false, 00:11:47.853 "nvme_io_md": false, 00:11:47.853 "write_zeroes": true, 
00:11:47.853 "zcopy": false, 00:11:47.853 "get_zone_info": false, 00:11:47.853 "zone_management": false, 00:11:47.853 "zone_append": false, 00:11:47.853 "compare": false, 00:11:47.853 "compare_and_write": false, 00:11:47.853 "abort": false, 00:11:47.853 "seek_hole": false, 00:11:47.853 "seek_data": false, 00:11:47.853 "copy": false, 00:11:47.853 "nvme_iov_md": false 00:11:47.853 }, 00:11:47.853 "memory_domains": [ 00:11:47.853 { 00:11:47.853 "dma_device_id": "system", 00:11:47.853 "dma_device_type": 1 00:11:47.853 }, 00:11:47.853 { 00:11:47.853 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.853 "dma_device_type": 2 00:11:47.853 }, 00:11:47.853 { 00:11:47.853 "dma_device_id": "system", 00:11:47.853 "dma_device_type": 1 00:11:47.853 }, 00:11:47.853 { 00:11:47.853 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.853 "dma_device_type": 2 00:11:47.853 }, 00:11:47.853 { 00:11:47.853 "dma_device_id": "system", 00:11:47.853 "dma_device_type": 1 00:11:47.853 }, 00:11:47.853 { 00:11:47.853 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.853 "dma_device_type": 2 00:11:47.853 } 00:11:47.853 ], 00:11:47.853 "driver_specific": { 00:11:47.853 "raid": { 00:11:47.853 "uuid": "8975e78a-31df-4fa5-ab26-774e67cea006", 00:11:47.853 "strip_size_kb": 0, 00:11:47.853 "state": "online", 00:11:47.853 "raid_level": "raid1", 00:11:47.853 "superblock": true, 00:11:47.853 "num_base_bdevs": 3, 00:11:47.853 "num_base_bdevs_discovered": 3, 00:11:47.853 "num_base_bdevs_operational": 3, 00:11:47.853 "base_bdevs_list": [ 00:11:47.853 { 00:11:47.853 "name": "BaseBdev1", 00:11:47.853 "uuid": "64158338-4212-44c3-89be-3a3d16605a05", 00:11:47.853 "is_configured": true, 00:11:47.853 "data_offset": 2048, 00:11:47.853 "data_size": 63488 00:11:47.853 }, 00:11:47.853 { 00:11:47.853 "name": "BaseBdev2", 00:11:47.853 "uuid": "ab56b5c9-f553-4ef3-9377-b6ca271110c1", 00:11:47.853 "is_configured": true, 00:11:47.853 "data_offset": 2048, 00:11:47.853 "data_size": 63488 00:11:47.853 }, 00:11:47.853 { 
00:11:47.853 "name": "BaseBdev3", 00:11:47.853 "uuid": "91f720e9-9ddb-4b6c-a0ae-256e06d22d5f", 00:11:47.853 "is_configured": true, 00:11:47.853 "data_offset": 2048, 00:11:47.853 "data_size": 63488 00:11:47.853 } 00:11:47.853 ] 00:11:47.853 } 00:11:47.853 } 00:11:47.853 }' 00:11:47.853 20:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:47.853 20:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:47.853 BaseBdev2 00:11:47.853 BaseBdev3' 00:11:47.853 20:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:47.854 20:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:47.854 20:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:47.854 20:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:47.854 20:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:47.854 20:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.854 20:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.854 20:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.113 20:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:48.113 20:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:48.113 20:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:48.113 20:08:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:48.113 20:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.113 20:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.113 20:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.113 20:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.113 20:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:48.113 20:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:48.113 20:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:48.113 20:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.113 20:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:48.113 20:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.113 20:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.113 20:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.113 20:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:48.113 20:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:48.113 20:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:48.113 20:08:33 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.113 20:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.113 [2024-10-17 20:08:33.626058] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:48.113 20:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.113 20:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:48.113 20:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:48.113 20:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:48.113 20:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:11:48.113 20:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:48.113 20:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:11:48.113 20:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:48.113 20:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:48.113 20:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:48.113 20:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:48.113 20:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:48.113 20:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.113 20:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.113 20:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.113 
20:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.113 20:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.113 20:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:48.113 20:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.113 20:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.113 20:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.113 20:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.113 "name": "Existed_Raid", 00:11:48.113 "uuid": "8975e78a-31df-4fa5-ab26-774e67cea006", 00:11:48.113 "strip_size_kb": 0, 00:11:48.113 "state": "online", 00:11:48.113 "raid_level": "raid1", 00:11:48.113 "superblock": true, 00:11:48.113 "num_base_bdevs": 3, 00:11:48.113 "num_base_bdevs_discovered": 2, 00:11:48.113 "num_base_bdevs_operational": 2, 00:11:48.113 "base_bdevs_list": [ 00:11:48.113 { 00:11:48.113 "name": null, 00:11:48.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.113 "is_configured": false, 00:11:48.113 "data_offset": 0, 00:11:48.113 "data_size": 63488 00:11:48.113 }, 00:11:48.113 { 00:11:48.113 "name": "BaseBdev2", 00:11:48.113 "uuid": "ab56b5c9-f553-4ef3-9377-b6ca271110c1", 00:11:48.113 "is_configured": true, 00:11:48.113 "data_offset": 2048, 00:11:48.113 "data_size": 63488 00:11:48.113 }, 00:11:48.113 { 00:11:48.113 "name": "BaseBdev3", 00:11:48.113 "uuid": "91f720e9-9ddb-4b6c-a0ae-256e06d22d5f", 00:11:48.113 "is_configured": true, 00:11:48.113 "data_offset": 2048, 00:11:48.113 "data_size": 63488 00:11:48.113 } 00:11:48.113 ] 00:11:48.113 }' 00:11:48.113 20:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.113 
20:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.680 20:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:48.680 20:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:48.680 20:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.680 20:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:48.680 20:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.680 20:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.680 20:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.680 20:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:48.680 20:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:48.680 20:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:48.680 20:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.680 20:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.680 [2024-10-17 20:08:34.311866] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:48.938 20:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.938 20:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:48.938 20:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:48.938 20:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:48.938 20:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.938 20:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.938 20:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:48.938 20:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.938 20:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:48.938 20:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:48.938 20:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:48.938 20:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.938 20:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.938 [2024-10-17 20:08:34.457578] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:48.938 [2024-10-17 20:08:34.457742] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:48.939 [2024-10-17 20:08:34.543520] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:48.939 [2024-10-17 20:08:34.543589] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:48.939 [2024-10-17 20:08:34.543609] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:48.939 20:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.939 20:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:48.939 20:08:34 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:48.939 20:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.939 20:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.939 20:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:48.939 20:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.939 20:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.197 20:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:49.197 20:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:49.197 20:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:11:49.197 20:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:49.197 20:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:49.197 20:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:49.197 20:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.197 20:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.197 BaseBdev2 00:11:49.197 20:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.197 20:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:49.197 20:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:49.198 20:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:11:49.198 20:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:49.198 20:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:49.198 20:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:49.198 20:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:49.198 20:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.198 20:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.198 20:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.198 20:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:49.198 20:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.198 20:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.198 [ 00:11:49.198 { 00:11:49.198 "name": "BaseBdev2", 00:11:49.198 "aliases": [ 00:11:49.198 "cb395c06-67ca-4678-8a8f-8e4fe7e40df3" 00:11:49.198 ], 00:11:49.198 "product_name": "Malloc disk", 00:11:49.198 "block_size": 512, 00:11:49.198 "num_blocks": 65536, 00:11:49.198 "uuid": "cb395c06-67ca-4678-8a8f-8e4fe7e40df3", 00:11:49.198 "assigned_rate_limits": { 00:11:49.198 "rw_ios_per_sec": 0, 00:11:49.198 "rw_mbytes_per_sec": 0, 00:11:49.198 "r_mbytes_per_sec": 0, 00:11:49.198 "w_mbytes_per_sec": 0 00:11:49.198 }, 00:11:49.198 "claimed": false, 00:11:49.198 "zoned": false, 00:11:49.198 "supported_io_types": { 00:11:49.198 "read": true, 00:11:49.198 "write": true, 00:11:49.198 "unmap": true, 00:11:49.198 "flush": true, 00:11:49.198 "reset": true, 00:11:49.198 "nvme_admin": false, 00:11:49.198 "nvme_io": false, 00:11:49.198 
"nvme_io_md": false, 00:11:49.198 "write_zeroes": true, 00:11:49.198 "zcopy": true, 00:11:49.198 "get_zone_info": false, 00:11:49.198 "zone_management": false, 00:11:49.198 "zone_append": false, 00:11:49.198 "compare": false, 00:11:49.198 "compare_and_write": false, 00:11:49.198 "abort": true, 00:11:49.198 "seek_hole": false, 00:11:49.198 "seek_data": false, 00:11:49.198 "copy": true, 00:11:49.198 "nvme_iov_md": false 00:11:49.198 }, 00:11:49.198 "memory_domains": [ 00:11:49.198 { 00:11:49.198 "dma_device_id": "system", 00:11:49.198 "dma_device_type": 1 00:11:49.198 }, 00:11:49.198 { 00:11:49.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.198 "dma_device_type": 2 00:11:49.198 } 00:11:49.198 ], 00:11:49.198 "driver_specific": {} 00:11:49.198 } 00:11:49.198 ] 00:11:49.198 20:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.198 20:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:49.198 20:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:49.198 20:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:49.198 20:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:49.198 20:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.198 20:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.198 BaseBdev3 00:11:49.198 20:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.198 20:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:49.198 20:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:49.198 20:08:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:49.198 20:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:49.198 20:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:49.198 20:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:49.198 20:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:49.198 20:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.198 20:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.198 20:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.198 20:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:49.198 20:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.198 20:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.198 [ 00:11:49.198 { 00:11:49.198 "name": "BaseBdev3", 00:11:49.198 "aliases": [ 00:11:49.198 "619e911f-dc3f-43b1-8400-96a47a07b4d8" 00:11:49.198 ], 00:11:49.198 "product_name": "Malloc disk", 00:11:49.198 "block_size": 512, 00:11:49.198 "num_blocks": 65536, 00:11:49.198 "uuid": "619e911f-dc3f-43b1-8400-96a47a07b4d8", 00:11:49.198 "assigned_rate_limits": { 00:11:49.198 "rw_ios_per_sec": 0, 00:11:49.198 "rw_mbytes_per_sec": 0, 00:11:49.198 "r_mbytes_per_sec": 0, 00:11:49.198 "w_mbytes_per_sec": 0 00:11:49.198 }, 00:11:49.198 "claimed": false, 00:11:49.198 "zoned": false, 00:11:49.198 "supported_io_types": { 00:11:49.198 "read": true, 00:11:49.198 "write": true, 00:11:49.198 "unmap": true, 00:11:49.198 "flush": true, 00:11:49.198 "reset": true, 00:11:49.198 "nvme_admin": false, 
00:11:49.198 "nvme_io": false, 00:11:49.198 "nvme_io_md": false, 00:11:49.198 "write_zeroes": true, 00:11:49.198 "zcopy": true, 00:11:49.198 "get_zone_info": false, 00:11:49.198 "zone_management": false, 00:11:49.198 "zone_append": false, 00:11:49.198 "compare": false, 00:11:49.198 "compare_and_write": false, 00:11:49.198 "abort": true, 00:11:49.198 "seek_hole": false, 00:11:49.198 "seek_data": false, 00:11:49.198 "copy": true, 00:11:49.198 "nvme_iov_md": false 00:11:49.198 }, 00:11:49.198 "memory_domains": [ 00:11:49.198 { 00:11:49.198 "dma_device_id": "system", 00:11:49.198 "dma_device_type": 1 00:11:49.198 }, 00:11:49.198 { 00:11:49.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.198 "dma_device_type": 2 00:11:49.198 } 00:11:49.198 ], 00:11:49.198 "driver_specific": {} 00:11:49.198 } 00:11:49.198 ] 00:11:49.198 20:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.198 20:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:49.198 20:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:49.198 20:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:49.198 20:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:49.198 20:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.198 20:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.198 [2024-10-17 20:08:34.765571] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:49.198 [2024-10-17 20:08:34.765810] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:49.198 [2024-10-17 20:08:34.765851] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:49.198 [2024-10-17 20:08:34.768441] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:49.198 20:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.198 20:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:49.198 20:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:49.198 20:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:49.198 20:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:49.198 20:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:49.198 20:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:49.198 20:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.198 20:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.198 20:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.198 20:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.198 20:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.198 20:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.198 20:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:49.198 20:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.198 
20:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.198 20:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.198 "name": "Existed_Raid", 00:11:49.198 "uuid": "97a8ada1-e956-4b7d-8cfa-29ee0c3c124c", 00:11:49.198 "strip_size_kb": 0, 00:11:49.198 "state": "configuring", 00:11:49.198 "raid_level": "raid1", 00:11:49.198 "superblock": true, 00:11:49.198 "num_base_bdevs": 3, 00:11:49.198 "num_base_bdevs_discovered": 2, 00:11:49.198 "num_base_bdevs_operational": 3, 00:11:49.198 "base_bdevs_list": [ 00:11:49.198 { 00:11:49.198 "name": "BaseBdev1", 00:11:49.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.198 "is_configured": false, 00:11:49.198 "data_offset": 0, 00:11:49.198 "data_size": 0 00:11:49.198 }, 00:11:49.198 { 00:11:49.198 "name": "BaseBdev2", 00:11:49.198 "uuid": "cb395c06-67ca-4678-8a8f-8e4fe7e40df3", 00:11:49.198 "is_configured": true, 00:11:49.198 "data_offset": 2048, 00:11:49.198 "data_size": 63488 00:11:49.198 }, 00:11:49.198 { 00:11:49.198 "name": "BaseBdev3", 00:11:49.198 "uuid": "619e911f-dc3f-43b1-8400-96a47a07b4d8", 00:11:49.198 "is_configured": true, 00:11:49.198 "data_offset": 2048, 00:11:49.198 "data_size": 63488 00:11:49.198 } 00:11:49.198 ] 00:11:49.198 }' 00:11:49.198 20:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.199 20:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.764 20:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:49.764 20:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.764 20:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.764 [2024-10-17 20:08:35.309833] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:49.764 20:08:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.764 20:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:49.764 20:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:49.765 20:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:49.765 20:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:49.765 20:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:49.765 20:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:49.765 20:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.765 20:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.765 20:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.765 20:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.765 20:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.765 20:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:49.765 20:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.765 20:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.765 20:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.765 20:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.765 "name": 
"Existed_Raid", 00:11:49.765 "uuid": "97a8ada1-e956-4b7d-8cfa-29ee0c3c124c", 00:11:49.765 "strip_size_kb": 0, 00:11:49.765 "state": "configuring", 00:11:49.765 "raid_level": "raid1", 00:11:49.765 "superblock": true, 00:11:49.765 "num_base_bdevs": 3, 00:11:49.765 "num_base_bdevs_discovered": 1, 00:11:49.765 "num_base_bdevs_operational": 3, 00:11:49.765 "base_bdevs_list": [ 00:11:49.765 { 00:11:49.765 "name": "BaseBdev1", 00:11:49.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.765 "is_configured": false, 00:11:49.765 "data_offset": 0, 00:11:49.765 "data_size": 0 00:11:49.765 }, 00:11:49.765 { 00:11:49.765 "name": null, 00:11:49.765 "uuid": "cb395c06-67ca-4678-8a8f-8e4fe7e40df3", 00:11:49.765 "is_configured": false, 00:11:49.765 "data_offset": 0, 00:11:49.765 "data_size": 63488 00:11:49.765 }, 00:11:49.765 { 00:11:49.765 "name": "BaseBdev3", 00:11:49.765 "uuid": "619e911f-dc3f-43b1-8400-96a47a07b4d8", 00:11:49.765 "is_configured": true, 00:11:49.765 "data_offset": 2048, 00:11:49.765 "data_size": 63488 00:11:49.765 } 00:11:49.765 ] 00:11:49.765 }' 00:11:49.765 20:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.765 20:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.331 20:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.331 20:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:50.331 20:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.331 20:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.331 20:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.331 20:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:50.331 
20:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:50.331 20:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.331 20:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.331 [2024-10-17 20:08:35.896852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:50.331 BaseBdev1 00:11:50.331 20:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.331 20:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:50.331 20:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:50.331 20:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:50.331 20:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:50.331 20:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:50.331 20:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:50.332 20:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:50.332 20:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.332 20:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.332 20:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.332 20:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:50.332 20:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:50.332 20:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.332 [ 00:11:50.332 { 00:11:50.332 "name": "BaseBdev1", 00:11:50.332 "aliases": [ 00:11:50.332 "72570297-fba7-4c7e-a67a-9f65e894eb67" 00:11:50.332 ], 00:11:50.332 "product_name": "Malloc disk", 00:11:50.332 "block_size": 512, 00:11:50.332 "num_blocks": 65536, 00:11:50.332 "uuid": "72570297-fba7-4c7e-a67a-9f65e894eb67", 00:11:50.332 "assigned_rate_limits": { 00:11:50.332 "rw_ios_per_sec": 0, 00:11:50.332 "rw_mbytes_per_sec": 0, 00:11:50.332 "r_mbytes_per_sec": 0, 00:11:50.332 "w_mbytes_per_sec": 0 00:11:50.332 }, 00:11:50.332 "claimed": true, 00:11:50.332 "claim_type": "exclusive_write", 00:11:50.332 "zoned": false, 00:11:50.332 "supported_io_types": { 00:11:50.332 "read": true, 00:11:50.332 "write": true, 00:11:50.332 "unmap": true, 00:11:50.332 "flush": true, 00:11:50.332 "reset": true, 00:11:50.332 "nvme_admin": false, 00:11:50.332 "nvme_io": false, 00:11:50.332 "nvme_io_md": false, 00:11:50.332 "write_zeroes": true, 00:11:50.332 "zcopy": true, 00:11:50.332 "get_zone_info": false, 00:11:50.332 "zone_management": false, 00:11:50.332 "zone_append": false, 00:11:50.332 "compare": false, 00:11:50.332 "compare_and_write": false, 00:11:50.332 "abort": true, 00:11:50.332 "seek_hole": false, 00:11:50.332 "seek_data": false, 00:11:50.332 "copy": true, 00:11:50.332 "nvme_iov_md": false 00:11:50.332 }, 00:11:50.332 "memory_domains": [ 00:11:50.332 { 00:11:50.332 "dma_device_id": "system", 00:11:50.332 "dma_device_type": 1 00:11:50.332 }, 00:11:50.332 { 00:11:50.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.332 "dma_device_type": 2 00:11:50.332 } 00:11:50.332 ], 00:11:50.332 "driver_specific": {} 00:11:50.332 } 00:11:50.332 ] 00:11:50.332 20:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.332 20:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:50.332 
20:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:50.332 20:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:50.332 20:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:50.332 20:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:50.332 20:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:50.332 20:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:50.332 20:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.332 20:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.332 20:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.332 20:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.332 20:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:50.332 20:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.332 20:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.332 20:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.332 20:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.332 20:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.332 "name": "Existed_Raid", 00:11:50.332 "uuid": "97a8ada1-e956-4b7d-8cfa-29ee0c3c124c", 00:11:50.332 "strip_size_kb": 0, 
00:11:50.332 "state": "configuring", 00:11:50.332 "raid_level": "raid1", 00:11:50.332 "superblock": true, 00:11:50.332 "num_base_bdevs": 3, 00:11:50.332 "num_base_bdevs_discovered": 2, 00:11:50.332 "num_base_bdevs_operational": 3, 00:11:50.332 "base_bdevs_list": [ 00:11:50.332 { 00:11:50.332 "name": "BaseBdev1", 00:11:50.332 "uuid": "72570297-fba7-4c7e-a67a-9f65e894eb67", 00:11:50.332 "is_configured": true, 00:11:50.332 "data_offset": 2048, 00:11:50.332 "data_size": 63488 00:11:50.332 }, 00:11:50.332 { 00:11:50.332 "name": null, 00:11:50.332 "uuid": "cb395c06-67ca-4678-8a8f-8e4fe7e40df3", 00:11:50.332 "is_configured": false, 00:11:50.332 "data_offset": 0, 00:11:50.332 "data_size": 63488 00:11:50.332 }, 00:11:50.332 { 00:11:50.332 "name": "BaseBdev3", 00:11:50.332 "uuid": "619e911f-dc3f-43b1-8400-96a47a07b4d8", 00:11:50.332 "is_configured": true, 00:11:50.332 "data_offset": 2048, 00:11:50.332 "data_size": 63488 00:11:50.332 } 00:11:50.332 ] 00:11:50.332 }' 00:11:50.332 20:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.332 20:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.898 20:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.898 20:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.898 20:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.898 20:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:50.898 20:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.898 20:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:50.898 20:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:11:50.898 20:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.898 20:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.898 [2024-10-17 20:08:36.517066] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:50.898 20:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.898 20:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:50.898 20:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:50.898 20:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:50.898 20:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:50.898 20:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:50.898 20:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:50.898 20:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.898 20:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.898 20:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.898 20:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.898 20:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.898 20:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.898 20:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:11:50.898 20:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.898 20:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.156 20:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.156 "name": "Existed_Raid", 00:11:51.156 "uuid": "97a8ada1-e956-4b7d-8cfa-29ee0c3c124c", 00:11:51.156 "strip_size_kb": 0, 00:11:51.156 "state": "configuring", 00:11:51.156 "raid_level": "raid1", 00:11:51.156 "superblock": true, 00:11:51.156 "num_base_bdevs": 3, 00:11:51.156 "num_base_bdevs_discovered": 1, 00:11:51.156 "num_base_bdevs_operational": 3, 00:11:51.156 "base_bdevs_list": [ 00:11:51.156 { 00:11:51.156 "name": "BaseBdev1", 00:11:51.156 "uuid": "72570297-fba7-4c7e-a67a-9f65e894eb67", 00:11:51.156 "is_configured": true, 00:11:51.156 "data_offset": 2048, 00:11:51.156 "data_size": 63488 00:11:51.156 }, 00:11:51.156 { 00:11:51.156 "name": null, 00:11:51.156 "uuid": "cb395c06-67ca-4678-8a8f-8e4fe7e40df3", 00:11:51.156 "is_configured": false, 00:11:51.156 "data_offset": 0, 00:11:51.156 "data_size": 63488 00:11:51.156 }, 00:11:51.156 { 00:11:51.156 "name": null, 00:11:51.156 "uuid": "619e911f-dc3f-43b1-8400-96a47a07b4d8", 00:11:51.156 "is_configured": false, 00:11:51.156 "data_offset": 0, 00:11:51.156 "data_size": 63488 00:11:51.156 } 00:11:51.156 ] 00:11:51.156 }' 00:11:51.156 20:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.156 20:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.415 20:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.415 20:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.415 20:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:11:51.415 20:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.738 20:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.738 20:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:51.738 20:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:51.738 20:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.738 20:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.738 [2024-10-17 20:08:37.109305] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:51.738 20:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.738 20:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:51.738 20:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:51.738 20:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:51.738 20:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:51.738 20:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:51.738 20:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:51.738 20:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.738 20:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.738 20:08:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.738 20:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.738 20:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.738 20:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.738 20:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:51.738 20:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.738 20:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.738 20:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.738 "name": "Existed_Raid", 00:11:51.738 "uuid": "97a8ada1-e956-4b7d-8cfa-29ee0c3c124c", 00:11:51.738 "strip_size_kb": 0, 00:11:51.738 "state": "configuring", 00:11:51.738 "raid_level": "raid1", 00:11:51.738 "superblock": true, 00:11:51.738 "num_base_bdevs": 3, 00:11:51.738 "num_base_bdevs_discovered": 2, 00:11:51.738 "num_base_bdevs_operational": 3, 00:11:51.738 "base_bdevs_list": [ 00:11:51.738 { 00:11:51.738 "name": "BaseBdev1", 00:11:51.738 "uuid": "72570297-fba7-4c7e-a67a-9f65e894eb67", 00:11:51.738 "is_configured": true, 00:11:51.738 "data_offset": 2048, 00:11:51.738 "data_size": 63488 00:11:51.738 }, 00:11:51.738 { 00:11:51.738 "name": null, 00:11:51.738 "uuid": "cb395c06-67ca-4678-8a8f-8e4fe7e40df3", 00:11:51.738 "is_configured": false, 00:11:51.738 "data_offset": 0, 00:11:51.738 "data_size": 63488 00:11:51.738 }, 00:11:51.738 { 00:11:51.738 "name": "BaseBdev3", 00:11:51.738 "uuid": "619e911f-dc3f-43b1-8400-96a47a07b4d8", 00:11:51.738 "is_configured": true, 00:11:51.738 "data_offset": 2048, 00:11:51.738 "data_size": 63488 00:11:51.738 } 00:11:51.738 ] 00:11:51.738 }' 00:11:51.738 20:08:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.738 20:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.319 20:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.319 20:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.319 20:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.319 20:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:52.319 20:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.319 20:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:52.319 20:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:52.319 20:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.319 20:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.319 [2024-10-17 20:08:37.721946] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:52.319 20:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.319 20:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:52.319 20:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:52.319 20:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:52.319 20:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:52.319 20:08:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:52.319 20:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:52.319 20:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.319 20:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.319 20:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.319 20:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.319 20:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.319 20:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.319 20:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:52.319 20:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.319 20:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.319 20:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.319 "name": "Existed_Raid", 00:11:52.319 "uuid": "97a8ada1-e956-4b7d-8cfa-29ee0c3c124c", 00:11:52.319 "strip_size_kb": 0, 00:11:52.319 "state": "configuring", 00:11:52.319 "raid_level": "raid1", 00:11:52.319 "superblock": true, 00:11:52.319 "num_base_bdevs": 3, 00:11:52.319 "num_base_bdevs_discovered": 1, 00:11:52.319 "num_base_bdevs_operational": 3, 00:11:52.319 "base_bdevs_list": [ 00:11:52.319 { 00:11:52.319 "name": null, 00:11:52.319 "uuid": "72570297-fba7-4c7e-a67a-9f65e894eb67", 00:11:52.319 "is_configured": false, 00:11:52.319 "data_offset": 0, 00:11:52.319 "data_size": 63488 00:11:52.319 }, 00:11:52.319 { 00:11:52.319 
"name": null, 00:11:52.319 "uuid": "cb395c06-67ca-4678-8a8f-8e4fe7e40df3", 00:11:52.319 "is_configured": false, 00:11:52.319 "data_offset": 0, 00:11:52.319 "data_size": 63488 00:11:52.319 }, 00:11:52.319 { 00:11:52.319 "name": "BaseBdev3", 00:11:52.319 "uuid": "619e911f-dc3f-43b1-8400-96a47a07b4d8", 00:11:52.319 "is_configured": true, 00:11:52.319 "data_offset": 2048, 00:11:52.319 "data_size": 63488 00:11:52.319 } 00:11:52.319 ] 00:11:52.319 }' 00:11:52.319 20:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.319 20:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.887 20:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.887 20:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.887 20:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.887 20:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:52.887 20:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.887 20:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:52.887 20:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:52.887 20:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.887 20:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.887 [2024-10-17 20:08:38.395136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:52.887 20:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.887 20:08:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:52.887 20:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:52.887 20:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:52.887 20:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:52.887 20:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:52.887 20:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:52.887 20:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.887 20:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.887 20:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.887 20:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.887 20:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.887 20:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.887 20:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.887 20:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:52.887 20:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.887 20:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.887 "name": "Existed_Raid", 00:11:52.887 "uuid": "97a8ada1-e956-4b7d-8cfa-29ee0c3c124c", 00:11:52.887 "strip_size_kb": 0, 
00:11:52.887 "state": "configuring", 00:11:52.887 "raid_level": "raid1", 00:11:52.887 "superblock": true, 00:11:52.887 "num_base_bdevs": 3, 00:11:52.887 "num_base_bdevs_discovered": 2, 00:11:52.887 "num_base_bdevs_operational": 3, 00:11:52.887 "base_bdevs_list": [ 00:11:52.887 { 00:11:52.887 "name": null, 00:11:52.887 "uuid": "72570297-fba7-4c7e-a67a-9f65e894eb67", 00:11:52.887 "is_configured": false, 00:11:52.887 "data_offset": 0, 00:11:52.887 "data_size": 63488 00:11:52.887 }, 00:11:52.887 { 00:11:52.887 "name": "BaseBdev2", 00:11:52.887 "uuid": "cb395c06-67ca-4678-8a8f-8e4fe7e40df3", 00:11:52.887 "is_configured": true, 00:11:52.887 "data_offset": 2048, 00:11:52.887 "data_size": 63488 00:11:52.887 }, 00:11:52.887 { 00:11:52.887 "name": "BaseBdev3", 00:11:52.887 "uuid": "619e911f-dc3f-43b1-8400-96a47a07b4d8", 00:11:52.887 "is_configured": true, 00:11:52.887 "data_offset": 2048, 00:11:52.887 "data_size": 63488 00:11:52.887 } 00:11:52.887 ] 00:11:52.887 }' 00:11:52.887 20:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.887 20:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.455 20:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.455 20:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:53.455 20:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.455 20:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.455 20:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.455 20:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:53.455 20:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 
00:11:53.455 20:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.455 20:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.455 20:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:53.455 20:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.455 20:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 72570297-fba7-4c7e-a67a-9f65e894eb67 00:11:53.455 20:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.455 20:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.455 [2024-10-17 20:08:39.101792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:53.455 [2024-10-17 20:08:39.102138] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:53.455 [2024-10-17 20:08:39.102158] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:53.455 [2024-10-17 20:08:39.102475] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:53.455 NewBaseBdev 00:11:53.455 [2024-10-17 20:08:39.102674] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:53.455 [2024-10-17 20:08:39.102697] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:53.455 [2024-10-17 20:08:39.102861] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:53.455 20:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.455 20:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev 
NewBaseBdev 00:11:53.455 20:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:11:53.455 20:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:53.455 20:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:53.455 20:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:53.455 20:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:53.455 20:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:53.455 20:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.455 20:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.714 20:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.714 20:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:53.714 20:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.714 20:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.714 [ 00:11:53.714 { 00:11:53.714 "name": "NewBaseBdev", 00:11:53.714 "aliases": [ 00:11:53.714 "72570297-fba7-4c7e-a67a-9f65e894eb67" 00:11:53.714 ], 00:11:53.714 "product_name": "Malloc disk", 00:11:53.714 "block_size": 512, 00:11:53.714 "num_blocks": 65536, 00:11:53.714 "uuid": "72570297-fba7-4c7e-a67a-9f65e894eb67", 00:11:53.714 "assigned_rate_limits": { 00:11:53.714 "rw_ios_per_sec": 0, 00:11:53.714 "rw_mbytes_per_sec": 0, 00:11:53.714 "r_mbytes_per_sec": 0, 00:11:53.714 "w_mbytes_per_sec": 0 00:11:53.714 }, 00:11:53.714 "claimed": true, 00:11:53.714 "claim_type": 
"exclusive_write", 00:11:53.714 "zoned": false, 00:11:53.714 "supported_io_types": { 00:11:53.714 "read": true, 00:11:53.714 "write": true, 00:11:53.714 "unmap": true, 00:11:53.714 "flush": true, 00:11:53.714 "reset": true, 00:11:53.714 "nvme_admin": false, 00:11:53.714 "nvme_io": false, 00:11:53.714 "nvme_io_md": false, 00:11:53.714 "write_zeroes": true, 00:11:53.714 "zcopy": true, 00:11:53.714 "get_zone_info": false, 00:11:53.714 "zone_management": false, 00:11:53.714 "zone_append": false, 00:11:53.714 "compare": false, 00:11:53.714 "compare_and_write": false, 00:11:53.714 "abort": true, 00:11:53.714 "seek_hole": false, 00:11:53.714 "seek_data": false, 00:11:53.714 "copy": true, 00:11:53.714 "nvme_iov_md": false 00:11:53.714 }, 00:11:53.714 "memory_domains": [ 00:11:53.714 { 00:11:53.714 "dma_device_id": "system", 00:11:53.714 "dma_device_type": 1 00:11:53.714 }, 00:11:53.714 { 00:11:53.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.714 "dma_device_type": 2 00:11:53.714 } 00:11:53.714 ], 00:11:53.714 "driver_specific": {} 00:11:53.714 } 00:11:53.714 ] 00:11:53.714 20:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.714 20:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:53.714 20:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:53.714 20:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:53.714 20:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:53.714 20:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:53.714 20:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:53.714 20:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:11:53.714 20:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.714 20:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.714 20:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.714 20:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.714 20:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.714 20:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.714 20:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:53.714 20:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.714 20:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.714 20:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.714 "name": "Existed_Raid", 00:11:53.714 "uuid": "97a8ada1-e956-4b7d-8cfa-29ee0c3c124c", 00:11:53.714 "strip_size_kb": 0, 00:11:53.714 "state": "online", 00:11:53.714 "raid_level": "raid1", 00:11:53.714 "superblock": true, 00:11:53.714 "num_base_bdevs": 3, 00:11:53.714 "num_base_bdevs_discovered": 3, 00:11:53.714 "num_base_bdevs_operational": 3, 00:11:53.714 "base_bdevs_list": [ 00:11:53.714 { 00:11:53.714 "name": "NewBaseBdev", 00:11:53.714 "uuid": "72570297-fba7-4c7e-a67a-9f65e894eb67", 00:11:53.714 "is_configured": true, 00:11:53.714 "data_offset": 2048, 00:11:53.714 "data_size": 63488 00:11:53.714 }, 00:11:53.714 { 00:11:53.714 "name": "BaseBdev2", 00:11:53.714 "uuid": "cb395c06-67ca-4678-8a8f-8e4fe7e40df3", 00:11:53.714 "is_configured": true, 00:11:53.714 "data_offset": 2048, 00:11:53.714 "data_size": 63488 
00:11:53.714 }, 00:11:53.714 { 00:11:53.714 "name": "BaseBdev3", 00:11:53.714 "uuid": "619e911f-dc3f-43b1-8400-96a47a07b4d8", 00:11:53.714 "is_configured": true, 00:11:53.714 "data_offset": 2048, 00:11:53.714 "data_size": 63488 00:11:53.714 } 00:11:53.714 ] 00:11:53.714 }' 00:11:53.714 20:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.714 20:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.282 20:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:54.282 20:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:54.282 20:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:54.282 20:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:54.282 20:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:54.282 20:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:54.282 20:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:54.282 20:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:54.282 20:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.282 20:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.282 [2024-10-17 20:08:39.670402] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:54.282 20:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.282 20:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:54.282 "name": 
"Existed_Raid", 00:11:54.282 "aliases": [ 00:11:54.282 "97a8ada1-e956-4b7d-8cfa-29ee0c3c124c" 00:11:54.282 ], 00:11:54.282 "product_name": "Raid Volume", 00:11:54.282 "block_size": 512, 00:11:54.282 "num_blocks": 63488, 00:11:54.282 "uuid": "97a8ada1-e956-4b7d-8cfa-29ee0c3c124c", 00:11:54.282 "assigned_rate_limits": { 00:11:54.282 "rw_ios_per_sec": 0, 00:11:54.282 "rw_mbytes_per_sec": 0, 00:11:54.282 "r_mbytes_per_sec": 0, 00:11:54.282 "w_mbytes_per_sec": 0 00:11:54.282 }, 00:11:54.282 "claimed": false, 00:11:54.282 "zoned": false, 00:11:54.282 "supported_io_types": { 00:11:54.282 "read": true, 00:11:54.282 "write": true, 00:11:54.282 "unmap": false, 00:11:54.282 "flush": false, 00:11:54.282 "reset": true, 00:11:54.282 "nvme_admin": false, 00:11:54.282 "nvme_io": false, 00:11:54.282 "nvme_io_md": false, 00:11:54.282 "write_zeroes": true, 00:11:54.282 "zcopy": false, 00:11:54.282 "get_zone_info": false, 00:11:54.282 "zone_management": false, 00:11:54.282 "zone_append": false, 00:11:54.282 "compare": false, 00:11:54.282 "compare_and_write": false, 00:11:54.282 "abort": false, 00:11:54.282 "seek_hole": false, 00:11:54.282 "seek_data": false, 00:11:54.282 "copy": false, 00:11:54.282 "nvme_iov_md": false 00:11:54.282 }, 00:11:54.282 "memory_domains": [ 00:11:54.282 { 00:11:54.282 "dma_device_id": "system", 00:11:54.282 "dma_device_type": 1 00:11:54.282 }, 00:11:54.282 { 00:11:54.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.282 "dma_device_type": 2 00:11:54.282 }, 00:11:54.282 { 00:11:54.282 "dma_device_id": "system", 00:11:54.282 "dma_device_type": 1 00:11:54.282 }, 00:11:54.282 { 00:11:54.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.282 "dma_device_type": 2 00:11:54.282 }, 00:11:54.282 { 00:11:54.282 "dma_device_id": "system", 00:11:54.282 "dma_device_type": 1 00:11:54.282 }, 00:11:54.282 { 00:11:54.283 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.283 "dma_device_type": 2 00:11:54.283 } 00:11:54.283 ], 00:11:54.283 "driver_specific": { 
00:11:54.283 "raid": { 00:11:54.283 "uuid": "97a8ada1-e956-4b7d-8cfa-29ee0c3c124c", 00:11:54.283 "strip_size_kb": 0, 00:11:54.283 "state": "online", 00:11:54.283 "raid_level": "raid1", 00:11:54.283 "superblock": true, 00:11:54.283 "num_base_bdevs": 3, 00:11:54.283 "num_base_bdevs_discovered": 3, 00:11:54.283 "num_base_bdevs_operational": 3, 00:11:54.283 "base_bdevs_list": [ 00:11:54.283 { 00:11:54.283 "name": "NewBaseBdev", 00:11:54.283 "uuid": "72570297-fba7-4c7e-a67a-9f65e894eb67", 00:11:54.283 "is_configured": true, 00:11:54.283 "data_offset": 2048, 00:11:54.283 "data_size": 63488 00:11:54.283 }, 00:11:54.283 { 00:11:54.283 "name": "BaseBdev2", 00:11:54.283 "uuid": "cb395c06-67ca-4678-8a8f-8e4fe7e40df3", 00:11:54.283 "is_configured": true, 00:11:54.283 "data_offset": 2048, 00:11:54.283 "data_size": 63488 00:11:54.283 }, 00:11:54.283 { 00:11:54.283 "name": "BaseBdev3", 00:11:54.283 "uuid": "619e911f-dc3f-43b1-8400-96a47a07b4d8", 00:11:54.283 "is_configured": true, 00:11:54.283 "data_offset": 2048, 00:11:54.283 "data_size": 63488 00:11:54.283 } 00:11:54.283 ] 00:11:54.283 } 00:11:54.283 } 00:11:54.283 }' 00:11:54.283 20:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:54.283 20:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:54.283 BaseBdev2 00:11:54.283 BaseBdev3' 00:11:54.283 20:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:54.283 20:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:54.283 20:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:54.283 20:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:54.283 
20:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.283 20:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.283 20:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:54.283 20:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.283 20:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:54.283 20:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:54.283 20:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:54.283 20:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:54.283 20:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:54.283 20:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.283 20:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.283 20:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.542 20:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:54.542 20:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:54.542 20:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:54.542 20:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:54.542 20:08:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.542 20:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:54.543 20:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.543 20:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.543 20:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:54.543 20:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:54.543 20:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:54.543 20:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.543 20:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.543 [2024-10-17 20:08:39.998107] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:54.543 [2024-10-17 20:08:39.998148] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:54.543 [2024-10-17 20:08:39.998239] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:54.543 [2024-10-17 20:08:39.998649] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:54.543 [2024-10-17 20:08:39.998669] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:54.543 20:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.543 20:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 67978 00:11:54.543 20:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # 
'[' -z 67978 ']' 00:11:54.543 20:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 67978 00:11:54.543 20:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:11:54.543 20:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:54.543 20:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67978 00:11:54.543 killing process with pid 67978 00:11:54.543 20:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:54.543 20:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:54.543 20:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67978' 00:11:54.543 20:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 67978 00:11:54.543 [2024-10-17 20:08:40.037595] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:54.543 20:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 67978 00:11:54.801 [2024-10-17 20:08:40.308982] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:55.736 ************************************ 00:11:55.736 END TEST raid_state_function_test_sb 00:11:55.736 ************************************ 00:11:55.736 20:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:55.736 00:11:55.736 real 0m11.990s 00:11:55.736 user 0m19.935s 00:11:55.736 sys 0m1.617s 00:11:55.736 20:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:55.736 20:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.737 20:08:41 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test 
raid_superblock_test raid1 3 00:11:55.737 20:08:41 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:55.737 20:08:41 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:55.737 20:08:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:55.996 ************************************ 00:11:55.996 START TEST raid_superblock_test 00:11:55.996 ************************************ 00:11:55.996 20:08:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 3 00:11:55.996 20:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:11:55.996 20:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:11:55.996 20:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:55.996 20:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:55.996 20:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:55.996 20:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:55.996 20:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:55.996 20:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:55.996 20:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:55.996 20:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:55.996 20:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:55.996 20:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:55.996 20:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:55.996 20:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' 
raid1 '!=' raid1 ']' 00:11:55.996 20:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:11:55.996 20:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68615 00:11:55.996 20:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:55.996 20:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68615 00:11:55.996 20:08:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 68615 ']' 00:11:55.996 20:08:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:55.996 20:08:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:55.996 20:08:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:55.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:55.996 20:08:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:55.996 20:08:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.996 [2024-10-17 20:08:41.502807] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 
00:11:55.996 [2024-10-17 20:08:41.503364] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68615 ] 00:11:56.254 [2024-10-17 20:08:41.677410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:56.254 [2024-10-17 20:08:41.812138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.518 [2024-10-17 20:08:42.018877] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:56.518 [2024-10-17 20:08:42.018936] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:57.092 20:08:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:57.092 20:08:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:11:57.092 20:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:57.092 20:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:57.092 20:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:57.092 20:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:57.092 20:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:57.092 20:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:57.092 20:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:57.092 20:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:57.092 20:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:57.092 
20:08:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.092 20:08:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.092 malloc1 00:11:57.092 20:08:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.092 20:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:57.092 20:08:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.092 20:08:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.092 [2024-10-17 20:08:42.525396] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:57.092 [2024-10-17 20:08:42.525647] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.092 [2024-10-17 20:08:42.525729] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:57.092 [2024-10-17 20:08:42.525900] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.092 [2024-10-17 20:08:42.528799] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.092 [2024-10-17 20:08:42.528981] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:57.092 pt1 00:11:57.092 20:08:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.092 20:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:57.092 20:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:57.092 20:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:57.092 20:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:57.092 20:08:42 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:57.092 20:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:57.092 20:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:57.092 20:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:57.092 20:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:57.092 20:08:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.092 20:08:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.092 malloc2 00:11:57.092 20:08:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.092 20:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:57.092 20:08:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.092 20:08:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.093 [2024-10-17 20:08:42.581656] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:57.093 [2024-10-17 20:08:42.581858] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.093 [2024-10-17 20:08:42.581902] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:57.093 [2024-10-17 20:08:42.581919] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.093 [2024-10-17 20:08:42.584728] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.093 [2024-10-17 20:08:42.584777] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:57.093 
pt2 00:11:57.093 20:08:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.093 20:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:57.093 20:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:57.093 20:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:57.093 20:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:57.093 20:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:57.093 20:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:57.093 20:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:57.093 20:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:57.093 20:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:57.093 20:08:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.093 20:08:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.093 malloc3 00:11:57.093 20:08:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.093 20:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:57.093 20:08:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.093 20:08:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.093 [2024-10-17 20:08:42.645666] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:57.093 [2024-10-17 20:08:42.645893] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.093 [2024-10-17 20:08:42.645978] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:57.093 [2024-10-17 20:08:42.646248] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.093 [2024-10-17 20:08:42.649268] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.093 [2024-10-17 20:08:42.649431] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:57.093 pt3 00:11:57.093 20:08:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.093 20:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:57.093 20:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:57.093 20:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:11:57.093 20:08:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.093 20:08:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.093 [2024-10-17 20:08:42.657829] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:57.093 [2024-10-17 20:08:42.660609] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:57.093 [2024-10-17 20:08:42.660864] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:57.093 [2024-10-17 20:08:42.661296] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:57.093 [2024-10-17 20:08:42.661347] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:57.093 [2024-10-17 20:08:42.661747] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:57.093 
[2024-10-17 20:08:42.661989] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:57.093 [2024-10-17 20:08:42.662014] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:57.093 [2024-10-17 20:08:42.662316] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:57.093 20:08:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.093 20:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:57.093 20:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:57.093 20:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:57.093 20:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:57.093 20:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:57.093 20:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:57.093 20:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.093 20:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.093 20:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.093 20:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.093 20:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.093 20:08:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.093 20:08:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.093 20:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:11:57.093 20:08:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.093 20:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.093 "name": "raid_bdev1", 00:11:57.093 "uuid": "1435e738-da68-4c89-a550-98a78a63ff5d", 00:11:57.093 "strip_size_kb": 0, 00:11:57.093 "state": "online", 00:11:57.093 "raid_level": "raid1", 00:11:57.093 "superblock": true, 00:11:57.093 "num_base_bdevs": 3, 00:11:57.093 "num_base_bdevs_discovered": 3, 00:11:57.093 "num_base_bdevs_operational": 3, 00:11:57.093 "base_bdevs_list": [ 00:11:57.093 { 00:11:57.093 "name": "pt1", 00:11:57.093 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:57.093 "is_configured": true, 00:11:57.093 "data_offset": 2048, 00:11:57.093 "data_size": 63488 00:11:57.093 }, 00:11:57.093 { 00:11:57.093 "name": "pt2", 00:11:57.093 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:57.093 "is_configured": true, 00:11:57.093 "data_offset": 2048, 00:11:57.093 "data_size": 63488 00:11:57.093 }, 00:11:57.093 { 00:11:57.093 "name": "pt3", 00:11:57.093 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:57.093 "is_configured": true, 00:11:57.093 "data_offset": 2048, 00:11:57.093 "data_size": 63488 00:11:57.093 } 00:11:57.093 ] 00:11:57.093 }' 00:11:57.093 20:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.093 20:08:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.659 20:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:57.659 20:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:57.659 20:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:57.659 20:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:57.659 20:08:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:57.659 20:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:57.659 20:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:57.659 20:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.659 20:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.659 20:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:57.659 [2024-10-17 20:08:43.178785] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:57.659 20:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.659 20:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:57.659 "name": "raid_bdev1", 00:11:57.659 "aliases": [ 00:11:57.659 "1435e738-da68-4c89-a550-98a78a63ff5d" 00:11:57.659 ], 00:11:57.659 "product_name": "Raid Volume", 00:11:57.659 "block_size": 512, 00:11:57.659 "num_blocks": 63488, 00:11:57.659 "uuid": "1435e738-da68-4c89-a550-98a78a63ff5d", 00:11:57.659 "assigned_rate_limits": { 00:11:57.659 "rw_ios_per_sec": 0, 00:11:57.659 "rw_mbytes_per_sec": 0, 00:11:57.659 "r_mbytes_per_sec": 0, 00:11:57.659 "w_mbytes_per_sec": 0 00:11:57.659 }, 00:11:57.659 "claimed": false, 00:11:57.659 "zoned": false, 00:11:57.659 "supported_io_types": { 00:11:57.659 "read": true, 00:11:57.659 "write": true, 00:11:57.659 "unmap": false, 00:11:57.659 "flush": false, 00:11:57.659 "reset": true, 00:11:57.659 "nvme_admin": false, 00:11:57.659 "nvme_io": false, 00:11:57.659 "nvme_io_md": false, 00:11:57.659 "write_zeroes": true, 00:11:57.659 "zcopy": false, 00:11:57.659 "get_zone_info": false, 00:11:57.659 "zone_management": false, 00:11:57.659 "zone_append": false, 00:11:57.659 "compare": false, 00:11:57.659 
"compare_and_write": false, 00:11:57.659 "abort": false, 00:11:57.659 "seek_hole": false, 00:11:57.659 "seek_data": false, 00:11:57.659 "copy": false, 00:11:57.659 "nvme_iov_md": false 00:11:57.659 }, 00:11:57.659 "memory_domains": [ 00:11:57.659 { 00:11:57.659 "dma_device_id": "system", 00:11:57.659 "dma_device_type": 1 00:11:57.659 }, 00:11:57.659 { 00:11:57.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.659 "dma_device_type": 2 00:11:57.659 }, 00:11:57.659 { 00:11:57.659 "dma_device_id": "system", 00:11:57.659 "dma_device_type": 1 00:11:57.659 }, 00:11:57.659 { 00:11:57.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.659 "dma_device_type": 2 00:11:57.659 }, 00:11:57.659 { 00:11:57.659 "dma_device_id": "system", 00:11:57.659 "dma_device_type": 1 00:11:57.659 }, 00:11:57.659 { 00:11:57.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.659 "dma_device_type": 2 00:11:57.659 } 00:11:57.659 ], 00:11:57.659 "driver_specific": { 00:11:57.659 "raid": { 00:11:57.659 "uuid": "1435e738-da68-4c89-a550-98a78a63ff5d", 00:11:57.659 "strip_size_kb": 0, 00:11:57.659 "state": "online", 00:11:57.660 "raid_level": "raid1", 00:11:57.660 "superblock": true, 00:11:57.660 "num_base_bdevs": 3, 00:11:57.660 "num_base_bdevs_discovered": 3, 00:11:57.660 "num_base_bdevs_operational": 3, 00:11:57.660 "base_bdevs_list": [ 00:11:57.660 { 00:11:57.660 "name": "pt1", 00:11:57.660 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:57.660 "is_configured": true, 00:11:57.660 "data_offset": 2048, 00:11:57.660 "data_size": 63488 00:11:57.660 }, 00:11:57.660 { 00:11:57.660 "name": "pt2", 00:11:57.660 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:57.660 "is_configured": true, 00:11:57.660 "data_offset": 2048, 00:11:57.660 "data_size": 63488 00:11:57.660 }, 00:11:57.660 { 00:11:57.660 "name": "pt3", 00:11:57.660 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:57.660 "is_configured": true, 00:11:57.660 "data_offset": 2048, 00:11:57.660 "data_size": 63488 00:11:57.660 } 
00:11:57.660 ] 00:11:57.660 } 00:11:57.660 } 00:11:57.660 }' 00:11:57.660 20:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:57.660 20:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:57.660 pt2 00:11:57.660 pt3' 00:11:57.660 20:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:57.918 20:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:57.918 20:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:57.918 20:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:57.918 20:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:57.918 20:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.918 20:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.918 20:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.918 20:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:57.918 20:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:57.918 20:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:57.918 20:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:57.918 20:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.918 20:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.918 20:08:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:57.918 20:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.918 20:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:57.918 20:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:57.918 20:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:57.918 20:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:57.918 20:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:57.918 20:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.918 20:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.918 20:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.918 20:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:57.918 20:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:57.918 20:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:57.918 20:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.918 20:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.918 20:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:57.918 [2024-10-17 20:08:43.510780] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:57.918 20:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:11:57.918 20:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=1435e738-da68-4c89-a550-98a78a63ff5d 00:11:57.918 20:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 1435e738-da68-4c89-a550-98a78a63ff5d ']' 00:11:57.918 20:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:57.918 20:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.918 20:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.918 [2024-10-17 20:08:43.562430] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:57.918 [2024-10-17 20:08:43.562467] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:57.918 [2024-10-17 20:08:43.562563] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:57.918 [2024-10-17 20:08:43.562663] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:57.918 [2024-10-17 20:08:43.562680] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:57.918 20:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.176 20:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.176 20:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.176 20:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.176 20:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:58.176 20:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.176 20:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:58.176 
20:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:58.176 20:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:58.176 20:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:58.176 20:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.176 20:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.176 20:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.176 20:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:58.176 20:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:58.176 20:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.176 20:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.176 20:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.176 20:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:58.176 20:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:58.176 20:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.176 20:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.176 20:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.176 20:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:58.176 20:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:58.176 20:08:43 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.176 20:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.177 20:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.177 20:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:58.177 20:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:58.177 20:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:11:58.177 20:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:58.177 20:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:58.177 20:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:58.177 20:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:58.177 20:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:58.177 20:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:58.177 20:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.177 20:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.177 [2024-10-17 20:08:43.726545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:58.177 [2024-10-17 20:08:43.729059] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:58.177 [2024-10-17 20:08:43.729282] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev malloc3 is claimed 00:11:58.177 [2024-10-17 20:08:43.729371] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:58.177 [2024-10-17 20:08:43.729450] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:58.177 [2024-10-17 20:08:43.729487] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:58.177 [2024-10-17 20:08:43.729517] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:58.177 [2024-10-17 20:08:43.729532] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:11:58.177 request: 00:11:58.177 { 00:11:58.177 "name": "raid_bdev1", 00:11:58.177 "raid_level": "raid1", 00:11:58.177 "base_bdevs": [ 00:11:58.177 "malloc1", 00:11:58.177 "malloc2", 00:11:58.177 "malloc3" 00:11:58.177 ], 00:11:58.177 "superblock": false, 00:11:58.177 "method": "bdev_raid_create", 00:11:58.177 "req_id": 1 00:11:58.177 } 00:11:58.177 Got JSON-RPC error response 00:11:58.177 response: 00:11:58.177 { 00:11:58.177 "code": -17, 00:11:58.177 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:58.177 } 00:11:58.177 20:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:58.177 20:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:11:58.177 20:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:58.177 20:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:58.177 20:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:58.177 20:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.177 20:08:43 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.177 20:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:58.177 20:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.177 20:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.177 20:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:58.177 20:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:58.177 20:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:58.177 20:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.177 20:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.177 [2024-10-17 20:08:43.798482] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:58.177 [2024-10-17 20:08:43.798687] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:58.177 [2024-10-17 20:08:43.798771] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:58.177 [2024-10-17 20:08:43.798893] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:58.177 [2024-10-17 20:08:43.801822] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:58.177 [2024-10-17 20:08:43.801983] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:58.177 [2024-10-17 20:08:43.802220] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:58.177 [2024-10-17 20:08:43.802397] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:58.177 pt1 00:11:58.177 20:08:43 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.177 20:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:58.177 20:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:58.177 20:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:58.177 20:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:58.177 20:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:58.177 20:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:58.177 20:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.177 20:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.177 20:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.177 20:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.177 20:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:58.177 20:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.177 20:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.177 20:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.435 20:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.435 20:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.435 "name": "raid_bdev1", 00:11:58.435 "uuid": "1435e738-da68-4c89-a550-98a78a63ff5d", 00:11:58.435 "strip_size_kb": 0, 00:11:58.435 "state": "configuring", 00:11:58.435 
"raid_level": "raid1", 00:11:58.435 "superblock": true, 00:11:58.435 "num_base_bdevs": 3, 00:11:58.435 "num_base_bdevs_discovered": 1, 00:11:58.435 "num_base_bdevs_operational": 3, 00:11:58.435 "base_bdevs_list": [ 00:11:58.435 { 00:11:58.435 "name": "pt1", 00:11:58.435 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:58.435 "is_configured": true, 00:11:58.435 "data_offset": 2048, 00:11:58.435 "data_size": 63488 00:11:58.435 }, 00:11:58.435 { 00:11:58.435 "name": null, 00:11:58.435 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:58.435 "is_configured": false, 00:11:58.435 "data_offset": 2048, 00:11:58.435 "data_size": 63488 00:11:58.435 }, 00:11:58.435 { 00:11:58.435 "name": null, 00:11:58.435 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:58.435 "is_configured": false, 00:11:58.435 "data_offset": 2048, 00:11:58.435 "data_size": 63488 00:11:58.435 } 00:11:58.435 ] 00:11:58.435 }' 00:11:58.435 20:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.435 20:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.694 20:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:11:58.694 20:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:58.694 20:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.694 20:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.952 [2024-10-17 20:08:44.350971] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:58.952 [2024-10-17 20:08:44.351102] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:58.952 [2024-10-17 20:08:44.351141] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:11:58.952 [2024-10-17 20:08:44.351157] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:58.952 [2024-10-17 20:08:44.351780] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:58.952 [2024-10-17 20:08:44.351814] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:58.952 [2024-10-17 20:08:44.351974] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:58.952 [2024-10-17 20:08:44.352051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:58.952 pt2 00:11:58.952 20:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.952 20:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:58.952 20:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.952 20:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.952 [2024-10-17 20:08:44.363048] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:58.952 20:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.952 20:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:58.952 20:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:58.952 20:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:58.952 20:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:58.952 20:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:58.952 20:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:58.952 20:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:58.952 20:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.952 20:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.952 20:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.952 20:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.952 20:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:58.952 20:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.952 20:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.952 20:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.952 20:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.952 "name": "raid_bdev1", 00:11:58.952 "uuid": "1435e738-da68-4c89-a550-98a78a63ff5d", 00:11:58.952 "strip_size_kb": 0, 00:11:58.952 "state": "configuring", 00:11:58.952 "raid_level": "raid1", 00:11:58.952 "superblock": true, 00:11:58.952 "num_base_bdevs": 3, 00:11:58.952 "num_base_bdevs_discovered": 1, 00:11:58.952 "num_base_bdevs_operational": 3, 00:11:58.952 "base_bdevs_list": [ 00:11:58.952 { 00:11:58.952 "name": "pt1", 00:11:58.952 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:58.952 "is_configured": true, 00:11:58.952 "data_offset": 2048, 00:11:58.952 "data_size": 63488 00:11:58.952 }, 00:11:58.952 { 00:11:58.952 "name": null, 00:11:58.952 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:58.952 "is_configured": false, 00:11:58.952 "data_offset": 0, 00:11:58.952 "data_size": 63488 00:11:58.952 }, 00:11:58.952 { 00:11:58.952 "name": null, 00:11:58.952 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:58.952 "is_configured": false, 00:11:58.952 "data_offset": 2048, 00:11:58.952 
"data_size": 63488 00:11:58.952 } 00:11:58.952 ] 00:11:58.952 }' 00:11:58.952 20:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.952 20:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.519 20:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:59.519 20:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:59.519 20:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:59.519 20:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.519 20:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.519 [2024-10-17 20:08:44.879025] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:59.519 [2024-10-17 20:08:44.879155] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:59.519 [2024-10-17 20:08:44.879185] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:11:59.519 [2024-10-17 20:08:44.879204] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:59.519 [2024-10-17 20:08:44.879784] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:59.519 [2024-10-17 20:08:44.879824] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:59.519 [2024-10-17 20:08:44.879942] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:59.519 [2024-10-17 20:08:44.879996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:59.519 pt2 00:11:59.519 20:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.519 20:08:44 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:59.519 20:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:59.519 20:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:59.519 20:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.519 20:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.519 [2024-10-17 20:08:44.891053] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:59.519 [2024-10-17 20:08:44.891112] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:59.519 [2024-10-17 20:08:44.891143] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:59.519 [2024-10-17 20:08:44.891163] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:59.519 [2024-10-17 20:08:44.891616] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:59.519 [2024-10-17 20:08:44.891658] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:59.519 [2024-10-17 20:08:44.891737] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:59.519 [2024-10-17 20:08:44.891770] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:59.519 [2024-10-17 20:08:44.891924] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:59.519 [2024-10-17 20:08:44.891947] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:59.519 [2024-10-17 20:08:44.892284] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:59.519 [2024-10-17 20:08:44.892488] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 
00:11:59.519 [2024-10-17 20:08:44.892504] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:59.519 [2024-10-17 20:08:44.892675] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:59.519 pt3 00:11:59.519 20:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.519 20:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:59.519 20:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:59.519 20:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:59.519 20:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:59.519 20:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:59.519 20:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:59.519 20:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:59.519 20:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:59.519 20:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.519 20:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.519 20:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.519 20:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.519 20:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.519 20:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.519 20:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:11:59.519 20:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.519 20:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.519 20:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.519 "name": "raid_bdev1", 00:11:59.519 "uuid": "1435e738-da68-4c89-a550-98a78a63ff5d", 00:11:59.519 "strip_size_kb": 0, 00:11:59.519 "state": "online", 00:11:59.519 "raid_level": "raid1", 00:11:59.519 "superblock": true, 00:11:59.519 "num_base_bdevs": 3, 00:11:59.519 "num_base_bdevs_discovered": 3, 00:11:59.519 "num_base_bdevs_operational": 3, 00:11:59.519 "base_bdevs_list": [ 00:11:59.519 { 00:11:59.519 "name": "pt1", 00:11:59.519 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:59.519 "is_configured": true, 00:11:59.519 "data_offset": 2048, 00:11:59.519 "data_size": 63488 00:11:59.519 }, 00:11:59.519 { 00:11:59.519 "name": "pt2", 00:11:59.519 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:59.519 "is_configured": true, 00:11:59.519 "data_offset": 2048, 00:11:59.519 "data_size": 63488 00:11:59.519 }, 00:11:59.519 { 00:11:59.519 "name": "pt3", 00:11:59.519 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:59.519 "is_configured": true, 00:11:59.519 "data_offset": 2048, 00:11:59.519 "data_size": 63488 00:11:59.519 } 00:11:59.519 ] 00:11:59.519 }' 00:11:59.519 20:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.519 20:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.778 20:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:59.778 20:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:59.778 20:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:59.778 20:08:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:59.778 20:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:59.778 20:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:59.778 20:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:59.778 20:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:59.778 20:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.778 20:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.778 [2024-10-17 20:08:45.415546] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:00.037 20:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.037 20:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:00.037 "name": "raid_bdev1", 00:12:00.037 "aliases": [ 00:12:00.037 "1435e738-da68-4c89-a550-98a78a63ff5d" 00:12:00.037 ], 00:12:00.037 "product_name": "Raid Volume", 00:12:00.037 "block_size": 512, 00:12:00.037 "num_blocks": 63488, 00:12:00.037 "uuid": "1435e738-da68-4c89-a550-98a78a63ff5d", 00:12:00.037 "assigned_rate_limits": { 00:12:00.037 "rw_ios_per_sec": 0, 00:12:00.037 "rw_mbytes_per_sec": 0, 00:12:00.037 "r_mbytes_per_sec": 0, 00:12:00.037 "w_mbytes_per_sec": 0 00:12:00.037 }, 00:12:00.037 "claimed": false, 00:12:00.037 "zoned": false, 00:12:00.037 "supported_io_types": { 00:12:00.037 "read": true, 00:12:00.037 "write": true, 00:12:00.037 "unmap": false, 00:12:00.037 "flush": false, 00:12:00.037 "reset": true, 00:12:00.037 "nvme_admin": false, 00:12:00.037 "nvme_io": false, 00:12:00.037 "nvme_io_md": false, 00:12:00.037 "write_zeroes": true, 00:12:00.037 "zcopy": false, 00:12:00.037 "get_zone_info": false, 00:12:00.037 
"zone_management": false, 00:12:00.037 "zone_append": false, 00:12:00.037 "compare": false, 00:12:00.037 "compare_and_write": false, 00:12:00.037 "abort": false, 00:12:00.037 "seek_hole": false, 00:12:00.037 "seek_data": false, 00:12:00.037 "copy": false, 00:12:00.037 "nvme_iov_md": false 00:12:00.037 }, 00:12:00.037 "memory_domains": [ 00:12:00.037 { 00:12:00.037 "dma_device_id": "system", 00:12:00.037 "dma_device_type": 1 00:12:00.037 }, 00:12:00.037 { 00:12:00.037 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.037 "dma_device_type": 2 00:12:00.037 }, 00:12:00.037 { 00:12:00.037 "dma_device_id": "system", 00:12:00.037 "dma_device_type": 1 00:12:00.037 }, 00:12:00.037 { 00:12:00.037 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.037 "dma_device_type": 2 00:12:00.037 }, 00:12:00.037 { 00:12:00.037 "dma_device_id": "system", 00:12:00.037 "dma_device_type": 1 00:12:00.037 }, 00:12:00.037 { 00:12:00.037 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.037 "dma_device_type": 2 00:12:00.037 } 00:12:00.037 ], 00:12:00.037 "driver_specific": { 00:12:00.037 "raid": { 00:12:00.037 "uuid": "1435e738-da68-4c89-a550-98a78a63ff5d", 00:12:00.037 "strip_size_kb": 0, 00:12:00.037 "state": "online", 00:12:00.037 "raid_level": "raid1", 00:12:00.037 "superblock": true, 00:12:00.037 "num_base_bdevs": 3, 00:12:00.037 "num_base_bdevs_discovered": 3, 00:12:00.037 "num_base_bdevs_operational": 3, 00:12:00.037 "base_bdevs_list": [ 00:12:00.037 { 00:12:00.037 "name": "pt1", 00:12:00.037 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:00.037 "is_configured": true, 00:12:00.037 "data_offset": 2048, 00:12:00.037 "data_size": 63488 00:12:00.037 }, 00:12:00.037 { 00:12:00.037 "name": "pt2", 00:12:00.037 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:00.037 "is_configured": true, 00:12:00.037 "data_offset": 2048, 00:12:00.037 "data_size": 63488 00:12:00.037 }, 00:12:00.037 { 00:12:00.037 "name": "pt3", 00:12:00.037 "uuid": "00000000-0000-0000-0000-000000000003", 
00:12:00.037 "is_configured": true,
00:12:00.037 "data_offset": 2048,
00:12:00.037 "data_size": 63488
00:12:00.037 }
00:12:00.037 ]
00:12:00.037 }
00:12:00.037 }
00:12:00.037 }'
00:12:00.037 20:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:12:00.037 20:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:12:00.037 pt2
00:12:00.037 pt3'
00:12:00.037 20:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:00.037 20:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:12:00.037 20:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:00.037 20:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:12:00.037 20:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:00.037 20:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:00.037 20:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:00.037 20:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:00.037 20:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:00.037 20:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:00.037 20:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:00.037 20:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:00.037 20:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:12:00.037 20:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:00.037 20:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:00.038 20:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:00.297 20:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:00.297 20:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:00.297 20:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:00.297 20:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:12:00.297 20:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:00.297 20:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:00.297 20:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:00.297 20:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:00.297 20:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:00.297 20:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:00.297 20:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:12:00.297 20:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:12:00.297 20:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:00.297 20:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:00.297 [2024-10-17 20:08:45.755631] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:00.297 20:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:00.297 20:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 1435e738-da68-4c89-a550-98a78a63ff5d '!=' 1435e738-da68-4c89-a550-98a78a63ff5d ']'
00:12:00.297 20:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1
00:12:00.297 20:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:12:00.297 20:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0
00:12:00.297 20:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:12:00.297 20:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:00.297 20:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:00.297 [2024-10-17 20:08:45.807429] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:12:00.297 20:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:00.297 20:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:12:00.297 20:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:00.297 20:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:00.297 20:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:00.297 20:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:00.297 20:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:12:00.297 20:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:00.297 20:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:00.297 20:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:00.297 20:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:00.297 20:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:00.297 20:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:00.297 20:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:00.297 20:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:00.297 20:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:00.297 20:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:00.297 "name": "raid_bdev1",
00:12:00.297 "uuid": "1435e738-da68-4c89-a550-98a78a63ff5d",
00:12:00.297 "strip_size_kb": 0,
00:12:00.297 "state": "online",
00:12:00.297 "raid_level": "raid1",
00:12:00.297 "superblock": true,
00:12:00.297 "num_base_bdevs": 3,
00:12:00.297 "num_base_bdevs_discovered": 2,
00:12:00.297 "num_base_bdevs_operational": 2,
00:12:00.297 "base_bdevs_list": [
00:12:00.297 {
00:12:00.297 "name": null,
00:12:00.297 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:00.297 "is_configured": false,
00:12:00.297 "data_offset": 0,
00:12:00.297 "data_size": 63488
00:12:00.297 },
00:12:00.297 {
00:12:00.297 "name": "pt2",
00:12:00.297 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:00.297 "is_configured": true,
00:12:00.297 "data_offset": 2048,
00:12:00.297 "data_size": 63488
00:12:00.297 },
00:12:00.297 {
00:12:00.297 "name": "pt3",
00:12:00.297 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:00.297 "is_configured": true,
00:12:00.297 "data_offset": 2048,
00:12:00.297 "data_size": 63488
00:12:00.297 }
00:12:00.297 ]
00:12:00.297 }'
00:12:00.297 20:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:00.297 20:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:00.865 20:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:12:00.865 20:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:00.865 20:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:00.865 [2024-10-17 20:08:46.351482] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:12:00.865 [2024-10-17 20:08:46.351523] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:12:00.865 [2024-10-17 20:08:46.351638] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:12:00.865 [2024-10-17 20:08:46.351715] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:12:00.865 [2024-10-17 20:08:46.351738] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:12:00.865 20:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:00.865 20:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:00.865 20:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]'
00:12:00.865 20:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:00.865 20:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:00.865 20:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:00.866 20:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev=
00:12:00.866 20:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']'
00:12:00.866 20:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 ))
00:12:00.866 20:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:12:00.866 20:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2
00:12:00.866 20:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:00.866 20:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:00.866 20:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:00.866 20:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:12:00.866 20:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:12:00.866 20:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3
00:12:00.866 20:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:00.866 20:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:00.866 20:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:00.866 20:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:12:00.866 20:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:12:00.866 20:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 ))
00:12:00.866 20:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:12:00.866 20:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:12:00.866 20:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:00.866 20:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:00.866 [2024-10-17 20:08:46.439432] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:12:00.866 [2024-10-17 20:08:46.439659] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:00.866 [2024-10-17 20:08:46.439732] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580
00:12:00.866 [2024-10-17 20:08:46.439869] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:00.866 [2024-10-17 20:08:46.442888] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:00.866 [2024-10-17 20:08:46.443076] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:12:00.866 [2024-10-17 20:08:46.443190] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:12:00.866 [2024-10-17 20:08:46.443255] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:12:00.866 pt2
00:12:00.866 20:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:00.866 20:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:12:00.866 20:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:00.866 20:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:00.866 20:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:00.866 20:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:00.866 20:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:12:00.866 20:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:00.866 20:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:00.866 20:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:00.866 20:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:00.866 20:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:00.866 20:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:00.866 20:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:00.866 20:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:00.866 20:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:00.866 20:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:00.866 "name": "raid_bdev1",
00:12:00.866 "uuid": "1435e738-da68-4c89-a550-98a78a63ff5d",
00:12:00.866 "strip_size_kb": 0,
00:12:00.866 "state": "configuring",
00:12:00.866 "raid_level": "raid1",
00:12:00.866 "superblock": true,
00:12:00.866 "num_base_bdevs": 3,
00:12:00.866 "num_base_bdevs_discovered": 1,
00:12:00.866 "num_base_bdevs_operational": 2,
00:12:00.866 "base_bdevs_list": [
00:12:00.866 {
00:12:00.866 "name": null,
00:12:00.866 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:00.866 "is_configured": false,
00:12:00.866 "data_offset": 2048,
00:12:00.866 "data_size": 63488
00:12:00.866 },
00:12:00.866 {
00:12:00.866 "name": "pt2",
00:12:00.866 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:00.866 "is_configured": true,
00:12:00.866 "data_offset": 2048,
00:12:00.866 "data_size": 63488
00:12:00.866 },
00:12:00.866 {
00:12:00.866 "name": null,
00:12:00.866 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:00.866 "is_configured": false,
00:12:00.866 "data_offset": 2048,
00:12:00.866 "data_size": 63488
00:12:00.866 }
00:12:00.866 ]
00:12:00.866 }'
00:12:00.866 20:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:00.866 20:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:01.434 20:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ ))
00:12:01.434 20:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:12:01.434 20:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2
00:12:01.434 20:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:12:01.434 20:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:01.434 20:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:01.434 [2024-10-17 20:08:46.975686] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:12:01.434 [2024-10-17 20:08:46.975952] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:01.434 [2024-10-17 20:08:46.976076] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:12:01.434 [2024-10-17 20:08:46.976249] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:01.434 [2024-10-17 20:08:46.976979] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:01.434 [2024-10-17 20:08:46.977018] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:12:01.434 [2024-10-17 20:08:46.977155] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:12:01.434 [2024-10-17 20:08:46.977202] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:12:01.434 [2024-10-17 20:08:46.977382] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
00:12:01.434 [2024-10-17 20:08:46.977417] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:12:01.434 [2024-10-17 20:08:46.977720] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:12:01.434 [2024-10-17 20:08:46.977905] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200
00:12:01.434 [2024-10-17 20:08:46.977919] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200
00:12:01.434 [2024-10-17 20:08:46.978151] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:01.434 pt3
00:12:01.434 20:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:01.434 20:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:12:01.434 20:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:01.434 20:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:01.434 20:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:01.434 20:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:01.434 20:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:12:01.434 20:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:01.434 20:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:01.434 20:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:01.435 20:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:01.435 20:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:01.435 20:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:01.435 20:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:01.435 20:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:01.435 20:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:01.435 20:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:01.435 "name": "raid_bdev1",
00:12:01.435 "uuid": "1435e738-da68-4c89-a550-98a78a63ff5d",
00:12:01.435 "strip_size_kb": 0,
00:12:01.435 "state": "online",
00:12:01.435 "raid_level": "raid1",
00:12:01.435 "superblock": true,
00:12:01.435 "num_base_bdevs": 3,
00:12:01.435 "num_base_bdevs_discovered": 2,
00:12:01.435 "num_base_bdevs_operational": 2,
00:12:01.435 "base_bdevs_list": [
00:12:01.435 {
00:12:01.435 "name": null,
00:12:01.435 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:01.435 "is_configured": false,
00:12:01.435 "data_offset": 2048,
00:12:01.435 "data_size": 63488
00:12:01.435 },
00:12:01.435 {
00:12:01.435 "name": "pt2",
00:12:01.435 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:01.435 "is_configured": true,
00:12:01.435 "data_offset": 2048,
00:12:01.435 "data_size": 63488
00:12:01.435 },
00:12:01.435 {
00:12:01.435 "name": "pt3",
00:12:01.435 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:01.435 "is_configured": true,
00:12:01.435 "data_offset": 2048,
00:12:01.435 "data_size": 63488
00:12:01.435 }
00:12:01.435 ]
00:12:01.435 }'
00:12:01.435 20:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:01.435 20:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:02.002 20:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:12:02.002 20:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:02.002 20:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:02.002 [2024-10-17 20:08:47.519818] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:12:02.002 [2024-10-17 20:08:47.520043] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:12:02.002 [2024-10-17 20:08:47.520184] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:12:02.002 [2024-10-17 20:08:47.520279] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:12:02.002 [2024-10-17 20:08:47.520296] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline
00:12:02.002 20:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:02.002 20:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:02.002 20:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:02.002 20:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:02.002 20:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]'
00:12:02.002 20:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:02.002 20:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev=
00:12:02.002 20:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']'
00:12:02.002 20:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']'
00:12:02.002 20:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2
00:12:02.002 20:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3
00:12:02.002 20:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:02.002 20:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:02.002 20:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:02.002 20:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:12:02.002 20:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:02.002 20:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:02.002 [2024-10-17 20:08:47.587828] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:12:02.002 [2024-10-17 20:08:47.588118] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:02.002 [2024-10-17 20:08:47.588162] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:12:02.002 [2024-10-17 20:08:47.588180] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:02.002 [2024-10-17 20:08:47.591220] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:02.002 [2024-10-17 20:08:47.591266] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:12:02.002 [2024-10-17 20:08:47.591365] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:12:02.002 [2024-10-17 20:08:47.591449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:12:02.002 [2024-10-17 20:08:47.591615] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2)
00:12:02.002 [2024-10-17 20:08:47.591632] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:12:02.002 [2024-10-17 20:08:47.591652] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring
00:12:02.002 [2024-10-17 20:08:47.591750] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:12:02.002 pt1
00:12:02.002 20:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:02.002 20:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']'
00:12:02.002 20:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:12:02.002 20:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:02.002 20:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:02.002 20:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:02.002 20:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:02.002 20:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:12:02.002 20:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:02.002 20:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:02.002 20:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:02.002 20:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:02.002 20:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:02.002 20:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:02.002 20:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:02.002 20:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:02.002 20:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:02.002 20:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:02.002 "name": "raid_bdev1",
00:12:02.002 "uuid": "1435e738-da68-4c89-a550-98a78a63ff5d",
00:12:02.002 "strip_size_kb": 0,
00:12:02.002 "state": "configuring",
00:12:02.002 "raid_level": "raid1",
00:12:02.002 "superblock": true,
00:12:02.002 "num_base_bdevs": 3,
00:12:02.002 "num_base_bdevs_discovered": 1,
00:12:02.002 "num_base_bdevs_operational": 2,
00:12:02.002 "base_bdevs_list": [
00:12:02.002 {
00:12:02.002 "name": null,
00:12:02.002 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:02.002 "is_configured": false,
00:12:02.002 "data_offset": 2048,
00:12:02.002 "data_size": 63488
00:12:02.002 },
00:12:02.002 {
00:12:02.002 "name": "pt2",
00:12:02.002 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:02.002 "is_configured": true,
00:12:02.002 "data_offset": 2048,
00:12:02.002 "data_size": 63488
00:12:02.002 },
00:12:02.002 {
00:12:02.002 "name": null,
00:12:02.002 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:02.002 "is_configured": false,
00:12:02.002 "data_offset": 2048,
00:12:02.002 "data_size": 63488
00:12:02.002 }
00:12:02.002 ]
00:12:02.002 }'
00:12:02.002 20:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:02.002 20:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:02.570 20:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring
00:12:02.570 20:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:02.570 20:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:02.570 20:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured'
00:12:02.570 20:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:02.570 20:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]]
00:12:02.570 20:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:12:02.570 20:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:02.570 20:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:02.570 [2024-10-17 20:08:48.168128] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:12:02.570 [2024-10-17 20:08:48.168342] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:02.570 [2024-10-17 20:08:48.168438] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480
00:12:02.570 [2024-10-17 20:08:48.168678] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:02.570 [2024-10-17 20:08:48.169289] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:02.570 [2024-10-17 20:08:48.169317] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:12:02.570 [2024-10-17 20:08:48.169449] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:12:02.570 [2024-10-17 20:08:48.169507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:12:02.570 [2024-10-17 20:08:48.169662] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900
00:12:02.570 [2024-10-17 20:08:48.169677] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:12:02.570 [2024-10-17 20:08:48.170037] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:12:02.570 [2024-10-17 20:08:48.170257] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900
00:12:02.570 [2024-10-17 20:08:48.170279] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900
00:12:02.570 [2024-10-17 20:08:48.170460] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:02.570 pt3
00:12:02.570 20:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:02.570 20:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:12:02.570 20:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:02.570 20:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:02.570 20:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:02.570 20:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:02.570 20:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:12:02.570 20:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:02.570 20:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:02.570 20:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:02.570 20:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:02.570 20:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:02.570 20:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:02.570 20:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:02.570 20:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:02.570 20:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:02.829 20:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:02.829 "name": "raid_bdev1",
00:12:02.829 "uuid": "1435e738-da68-4c89-a550-98a78a63ff5d",
00:12:02.829 "strip_size_kb": 0,
00:12:02.829 "state": "online",
00:12:02.829 "raid_level": "raid1",
00:12:02.829 "superblock": true,
00:12:02.829 "num_base_bdevs": 3,
00:12:02.829 "num_base_bdevs_discovered": 2,
00:12:02.829 "num_base_bdevs_operational": 2,
00:12:02.829 "base_bdevs_list": [
00:12:02.829 {
00:12:02.829 "name": null,
00:12:02.829 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:02.829 "is_configured": false,
00:12:02.829 "data_offset": 2048,
00:12:02.829 "data_size": 63488
00:12:02.829 },
00:12:02.829 {
00:12:02.829 "name": "pt2",
00:12:02.829 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:02.829 "is_configured": true,
00:12:02.829 "data_offset": 2048,
00:12:02.829 "data_size": 63488
00:12:02.829 },
00:12:02.829 {
00:12:02.829 "name": "pt3",
00:12:02.829 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:02.829 "is_configured": true,
00:12:02.829 "data_offset": 2048,
00:12:02.829 "data_size": 63488
00:12:02.829 }
00:12:02.829 ]
00:12:02.829 }'
00:12:02.829 20:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:02.829 20:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:03.087 20:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online
00:12:03.087 20:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured'
00:12:03.087 20:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:03.087 20:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:03.345 20:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:03.345 20:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]]
00:12:03.345 20:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid'
00:12:03.345 20:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:12:03.345 20:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:03.345 20:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:03.345 [2024-10-17 20:08:48.788662] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:03.345 20:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:03.345 20:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 1435e738-da68-4c89-a550-98a78a63ff5d '!=' 1435e738-da68-4c89-a550-98a78a63ff5d ']'
00:12:03.345 20:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68615
00:12:03.345 20:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 68615 ']'
00:12:03.345 20:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 68615
00:12:03.345 20:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname
00:12:03.345 20:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:12:03.345 20:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68615
00:12:03.345 killing process with pid 68615 20:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:12:03.345 20:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:12:03.345 20:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68615'
00:12:03.345 20:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 68615
00:12:03.345 [2024-10-17 20:08:48.863771] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:12:03.345 20:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 68615
00:12:03.345 [2024-10-17 20:08:48.863887] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:12:03.345 [2024-10-17 20:08:48.863966] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:12:03.345 [2024-10-17 20:08:48.863986] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline
00:12:03.604 [2024-10-17 20:08:49.127985] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:12:04.539 20:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0
00:12:04.539
00:12:04.539 real 0m8.757s
00:12:04.539 user 0m14.440s
00:12:04.539 sys 0m1.170s
00:12:04.539 ************************************
00:12:04.539 END TEST raid_superblock_test
00:12:04.539 ************************************
00:12:04.539 20:08:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:12:04.539 20:08:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:04.798 20:08:50 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read
00:12:04.798 20:08:50 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:12:04.798 20:08:50 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:12:04.798 20:08:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:12:04.798 ************************************
00:12:04.798 START TEST raid_read_error_test
00:12:04.798 ************************************
00:12:04.798 20:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 read
00:12:04.798 20:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1
00:12:04.798 20:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3
00:12:04.798 20:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read
00:12:04.798 20:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:12:04.798 20:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:12:04.798 20:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:12:04.798 20:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:12:04.798 20:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:12:04.798 20:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:12:04.798 20:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:12:04.798 20:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:12:04.798 20:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3
00:12:04.798 20:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:12:04.798 20:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:12:04.798 20:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:12:04.798 20:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:12:04.798 20:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:12:04.798 20:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:12:04.798 20:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:12:04.798 20:08:50
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:04.798 20:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:04.798 20:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:04.798 20:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:04.798 20:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:04.798 20:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.SirlQvNHRn 00:12:04.798 20:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69072 00:12:04.798 20:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69072 00:12:04.798 20:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:04.798 20:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 69072 ']' 00:12:04.798 20:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:04.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:04.798 20:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:04.798 20:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:04.798 20:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:04.798 20:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.798 [2024-10-17 20:08:50.312409] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 
00:12:04.798 [2024-10-17 20:08:50.312571] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69072 ] 00:12:05.057 [2024-10-17 20:08:50.479661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:05.057 [2024-10-17 20:08:50.611225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.315 [2024-10-17 20:08:50.814308] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:05.315 [2024-10-17 20:08:50.814620] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:05.882 20:08:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:05.882 20:08:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:12:05.882 20:08:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:05.882 20:08:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:05.882 20:08:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.882 20:08:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.882 BaseBdev1_malloc 00:12:05.882 20:08:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.882 20:08:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:05.882 20:08:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.882 20:08:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.882 true 00:12:05.882 20:08:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:05.882 20:08:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:05.882 20:08:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.882 20:08:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.882 [2024-10-17 20:08:51.369732] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:05.882 [2024-10-17 20:08:51.369807] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:05.882 [2024-10-17 20:08:51.369837] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:05.882 [2024-10-17 20:08:51.369855] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:05.882 [2024-10-17 20:08:51.372935] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:05.882 [2024-10-17 20:08:51.373234] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:05.882 BaseBdev1 00:12:05.882 20:08:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.882 20:08:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:05.882 20:08:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:05.882 20:08:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.882 20:08:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.882 BaseBdev2_malloc 00:12:05.882 20:08:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.882 20:08:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:05.882 20:08:51 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.882 20:08:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.882 true 00:12:05.882 20:08:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.882 20:08:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:05.882 20:08:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.882 20:08:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.882 [2024-10-17 20:08:51.433157] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:05.882 [2024-10-17 20:08:51.433231] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:05.882 [2024-10-17 20:08:51.433257] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:05.882 [2024-10-17 20:08:51.433274] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:05.882 [2024-10-17 20:08:51.436198] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:05.882 [2024-10-17 20:08:51.436393] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:05.882 BaseBdev2 00:12:05.882 20:08:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.882 20:08:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:05.882 20:08:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:05.883 20:08:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.883 20:08:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.883 BaseBdev3_malloc 00:12:05.883 20:08:51 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.883 20:08:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:05.883 20:08:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.883 20:08:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.883 true 00:12:05.883 20:08:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.883 20:08:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:05.883 20:08:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.883 20:08:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.883 [2024-10-17 20:08:51.502154] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:05.883 [2024-10-17 20:08:51.502227] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:05.883 [2024-10-17 20:08:51.502256] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:05.883 [2024-10-17 20:08:51.502274] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:05.883 [2024-10-17 20:08:51.505129] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:05.883 [2024-10-17 20:08:51.505183] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:05.883 BaseBdev3 00:12:05.883 20:08:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.883 20:08:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:12:05.883 20:08:51 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.883 20:08:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.883 [2024-10-17 20:08:51.510242] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:05.883 [2024-10-17 20:08:51.512828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:05.883 [2024-10-17 20:08:51.513091] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:05.883 [2024-10-17 20:08:51.513414] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:05.883 [2024-10-17 20:08:51.513435] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:05.883 [2024-10-17 20:08:51.513760] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:05.883 [2024-10-17 20:08:51.514034] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:05.883 [2024-10-17 20:08:51.514056] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:05.883 [2024-10-17 20:08:51.514294] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:05.883 20:08:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.883 20:08:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:05.883 20:08:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:05.883 20:08:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:05.883 20:08:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:05.883 20:08:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:05.883 20:08:51 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:05.883 20:08:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.883 20:08:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.883 20:08:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.883 20:08:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.883 20:08:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:05.883 20:08:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.883 20:08:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.883 20:08:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.883 20:08:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.141 20:08:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.141 "name": "raid_bdev1", 00:12:06.141 "uuid": "63a0ced1-94a3-4985-9669-2ccbf134fe28", 00:12:06.141 "strip_size_kb": 0, 00:12:06.141 "state": "online", 00:12:06.141 "raid_level": "raid1", 00:12:06.141 "superblock": true, 00:12:06.141 "num_base_bdevs": 3, 00:12:06.141 "num_base_bdevs_discovered": 3, 00:12:06.141 "num_base_bdevs_operational": 3, 00:12:06.141 "base_bdevs_list": [ 00:12:06.141 { 00:12:06.141 "name": "BaseBdev1", 00:12:06.141 "uuid": "9719e8e5-d492-5523-9376-75386f42b2ab", 00:12:06.141 "is_configured": true, 00:12:06.141 "data_offset": 2048, 00:12:06.141 "data_size": 63488 00:12:06.141 }, 00:12:06.141 { 00:12:06.141 "name": "BaseBdev2", 00:12:06.141 "uuid": "7c022961-f1a5-5f94-8f09-c9c58063643a", 00:12:06.141 "is_configured": true, 00:12:06.141 "data_offset": 2048, 00:12:06.141 "data_size": 63488 
00:12:06.141 }, 00:12:06.141 { 00:12:06.141 "name": "BaseBdev3", 00:12:06.141 "uuid": "5c3cb6e0-40f2-5e8e-87f3-b94a1544c5ec", 00:12:06.141 "is_configured": true, 00:12:06.141 "data_offset": 2048, 00:12:06.141 "data_size": 63488 00:12:06.141 } 00:12:06.141 ] 00:12:06.141 }' 00:12:06.141 20:08:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.141 20:08:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.739 20:08:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:06.739 20:08:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:06.739 [2024-10-17 20:08:52.203968] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:07.675 20:08:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:07.675 20:08:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.675 20:08:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.675 20:08:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.675 20:08:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:07.675 20:08:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:07.675 20:08:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:12:07.675 20:08:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:12:07.675 20:08:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:07.675 20:08:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:07.675 
20:08:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:07.675 20:08:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:07.675 20:08:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:07.675 20:08:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:07.675 20:08:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.675 20:08:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.675 20:08:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.675 20:08:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.675 20:08:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.675 20:08:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.675 20:08:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:07.675 20:08:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.675 20:08:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.675 20:08:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.675 "name": "raid_bdev1", 00:12:07.675 "uuid": "63a0ced1-94a3-4985-9669-2ccbf134fe28", 00:12:07.675 "strip_size_kb": 0, 00:12:07.675 "state": "online", 00:12:07.675 "raid_level": "raid1", 00:12:07.675 "superblock": true, 00:12:07.675 "num_base_bdevs": 3, 00:12:07.675 "num_base_bdevs_discovered": 3, 00:12:07.675 "num_base_bdevs_operational": 3, 00:12:07.675 "base_bdevs_list": [ 00:12:07.675 { 00:12:07.675 "name": "BaseBdev1", 00:12:07.676 "uuid": "9719e8e5-d492-5523-9376-75386f42b2ab", 
00:12:07.676 "is_configured": true, 00:12:07.676 "data_offset": 2048, 00:12:07.676 "data_size": 63488 00:12:07.676 }, 00:12:07.676 { 00:12:07.676 "name": "BaseBdev2", 00:12:07.676 "uuid": "7c022961-f1a5-5f94-8f09-c9c58063643a", 00:12:07.676 "is_configured": true, 00:12:07.676 "data_offset": 2048, 00:12:07.676 "data_size": 63488 00:12:07.676 }, 00:12:07.676 { 00:12:07.676 "name": "BaseBdev3", 00:12:07.676 "uuid": "5c3cb6e0-40f2-5e8e-87f3-b94a1544c5ec", 00:12:07.676 "is_configured": true, 00:12:07.676 "data_offset": 2048, 00:12:07.676 "data_size": 63488 00:12:07.676 } 00:12:07.676 ] 00:12:07.676 }' 00:12:07.676 20:08:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.676 20:08:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.242 20:08:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:08.242 20:08:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.242 20:08:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.242 [2024-10-17 20:08:53.611645] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:08.243 [2024-10-17 20:08:53.611683] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:08.243 [2024-10-17 20:08:53.615214] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:08.243 [2024-10-17 20:08:53.615279] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:08.243 [2024-10-17 20:08:53.615428] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:08.243 [2024-10-17 20:08:53.615444] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:08.243 { 00:12:08.243 "results": [ 00:12:08.243 { 00:12:08.243 "job": "raid_bdev1", 
00:12:08.243 "core_mask": "0x1", 00:12:08.243 "workload": "randrw", 00:12:08.243 "percentage": 50, 00:12:08.243 "status": "finished", 00:12:08.243 "queue_depth": 1, 00:12:08.243 "io_size": 131072, 00:12:08.243 "runtime": 1.404693, 00:12:08.243 "iops": 8935.760340515686, 00:12:08.243 "mibps": 1116.9700425644608, 00:12:08.243 "io_failed": 0, 00:12:08.243 "io_timeout": 0, 00:12:08.243 "avg_latency_us": 107.5703296830639, 00:12:08.243 "min_latency_us": 42.589090909090906, 00:12:08.243 "max_latency_us": 2010.7636363636364 00:12:08.243 } 00:12:08.243 ], 00:12:08.243 "core_count": 1 00:12:08.243 } 00:12:08.243 20:08:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.243 20:08:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69072 00:12:08.243 20:08:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 69072 ']' 00:12:08.243 20:08:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 69072 00:12:08.243 20:08:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:12:08.243 20:08:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:08.243 20:08:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69072 00:12:08.243 killing process with pid 69072 00:12:08.243 20:08:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:08.243 20:08:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:08.243 20:08:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69072' 00:12:08.243 20:08:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 69072 00:12:08.243 20:08:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 69072 00:12:08.243 [2024-10-17 20:08:53.651696] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:08.243 [2024-10-17 20:08:53.865677] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:09.620 20:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.SirlQvNHRn 00:12:09.620 20:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:09.620 20:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:09.620 20:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:12:09.620 20:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:09.620 20:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:09.620 20:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:09.620 20:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:09.620 00:12:09.620 real 0m4.779s 00:12:09.620 user 0m5.996s 00:12:09.620 sys 0m0.556s 00:12:09.620 20:08:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:09.620 20:08:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.620 ************************************ 00:12:09.620 END TEST raid_read_error_test 00:12:09.620 ************************************ 00:12:09.620 20:08:55 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:12:09.620 20:08:55 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:09.620 20:08:55 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:09.620 20:08:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:09.620 ************************************ 00:12:09.620 START TEST raid_write_error_test 00:12:09.620 ************************************ 00:12:09.620 20:08:55 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 write 00:12:09.620 20:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:09.620 20:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:12:09.620 20:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:09.620 20:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:09.620 20:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:09.620 20:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:09.620 20:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:09.620 20:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:09.620 20:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:09.620 20:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:09.620 20:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:09.620 20:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:09.620 20:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:09.620 20:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:09.620 20:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:09.620 20:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:09.620 20:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:09.620 20:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:09.620 20:08:55 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:09.620 20:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:09.620 20:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:09.620 20:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:09.620 20:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:09.620 20:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:09.621 20:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.kXlASsEbFW 00:12:09.621 20:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69222 00:12:09.621 20:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69222 00:12:09.621 20:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:09.621 20:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 69222 ']' 00:12:09.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:09.621 20:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:09.621 20:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:09.621 20:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:09.621 20:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:09.621 20:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.621 [2024-10-17 20:08:55.147614] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 00:12:09.621 [2024-10-17 20:08:55.147760] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69222 ] 00:12:09.879 [2024-10-17 20:08:55.311659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:09.880 [2024-10-17 20:08:55.447913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.139 [2024-10-17 20:08:55.655935] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:10.139 [2024-10-17 20:08:55.656242] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:10.707 20:08:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:10.707 20:08:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:12:10.707 20:08:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:10.707 20:08:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:10.707 20:08:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.707 20:08:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.707 BaseBdev1_malloc 00:12:10.707 20:08:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.707 20:08:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:12:10.707 20:08:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.707 20:08:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.707 true 00:12:10.707 20:08:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.707 20:08:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:10.707 20:08:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.707 20:08:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.707 [2024-10-17 20:08:56.238777] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:10.707 [2024-10-17 20:08:56.238849] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:10.707 [2024-10-17 20:08:56.238880] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:10.707 [2024-10-17 20:08:56.238898] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:10.707 [2024-10-17 20:08:56.242115] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:10.707 [2024-10-17 20:08:56.242294] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:10.707 BaseBdev1 00:12:10.707 20:08:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.707 20:08:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:10.707 20:08:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:10.707 20:08:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.707 20:08:56 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:10.707 BaseBdev2_malloc 00:12:10.707 20:08:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.707 20:08:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:10.707 20:08:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.707 20:08:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.707 true 00:12:10.707 20:08:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.707 20:08:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:10.707 20:08:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.707 20:08:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.707 [2024-10-17 20:08:56.296179] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:10.707 [2024-10-17 20:08:56.296252] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:10.707 [2024-10-17 20:08:56.296280] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:10.707 [2024-10-17 20:08:56.296297] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:10.707 [2024-10-17 20:08:56.299247] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:10.707 [2024-10-17 20:08:56.299298] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:10.707 BaseBdev2 00:12:10.707 20:08:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.707 20:08:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:10.707 20:08:56 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:10.707 20:08:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.707 20:08:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.707 BaseBdev3_malloc 00:12:10.707 20:08:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.707 20:08:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:10.707 20:08:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.707 20:08:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.998 true 00:12:10.998 20:08:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.998 20:08:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:10.999 20:08:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.999 20:08:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.999 [2024-10-17 20:08:56.370548] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:10.999 [2024-10-17 20:08:56.370781] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:10.999 [2024-10-17 20:08:56.370927] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:10.999 [2024-10-17 20:08:56.370960] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:10.999 [2024-10-17 20:08:56.373921] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:10.999 [2024-10-17 20:08:56.373989] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:12:10.999 BaseBdev3 00:12:10.999 20:08:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.999 20:08:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:12:10.999 20:08:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.999 20:08:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.999 [2024-10-17 20:08:56.378705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:10.999 [2024-10-17 20:08:56.381441] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:10.999 [2024-10-17 20:08:56.381707] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:10.999 [2024-10-17 20:08:56.382137] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:10.999 [2024-10-17 20:08:56.382278] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:10.999 [2024-10-17 20:08:56.382698] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:10.999 [2024-10-17 20:08:56.383078] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:10.999 [2024-10-17 20:08:56.383216] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:10.999 [2024-10-17 20:08:56.383563] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:10.999 20:08:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.999 20:08:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:10.999 20:08:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:12:10.999 20:08:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:10.999 20:08:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:10.999 20:08:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:10.999 20:08:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:10.999 20:08:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.999 20:08:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.999 20:08:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.999 20:08:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.999 20:08:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.999 20:08:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.999 20:08:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.999 20:08:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.999 20:08:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.999 20:08:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.999 "name": "raid_bdev1", 00:12:10.999 "uuid": "16c613d6-e5ce-49ac-a175-a5a35d1a2a41", 00:12:10.999 "strip_size_kb": 0, 00:12:10.999 "state": "online", 00:12:10.999 "raid_level": "raid1", 00:12:10.999 "superblock": true, 00:12:10.999 "num_base_bdevs": 3, 00:12:10.999 "num_base_bdevs_discovered": 3, 00:12:10.999 "num_base_bdevs_operational": 3, 00:12:10.999 "base_bdevs_list": [ 00:12:10.999 { 00:12:10.999 "name": "BaseBdev1", 00:12:10.999 
"uuid": "394a1f1b-628e-5c6f-9603-1d5afec6c51c", 00:12:10.999 "is_configured": true, 00:12:10.999 "data_offset": 2048, 00:12:10.999 "data_size": 63488 00:12:10.999 }, 00:12:10.999 { 00:12:10.999 "name": "BaseBdev2", 00:12:10.999 "uuid": "b7410c98-9bc7-5af3-9118-57636bcc0181", 00:12:10.999 "is_configured": true, 00:12:10.999 "data_offset": 2048, 00:12:10.999 "data_size": 63488 00:12:10.999 }, 00:12:10.999 { 00:12:10.999 "name": "BaseBdev3", 00:12:10.999 "uuid": "58abb58e-dd7e-5208-86ce-f9a96d6b074d", 00:12:10.999 "is_configured": true, 00:12:10.999 "data_offset": 2048, 00:12:10.999 "data_size": 63488 00:12:10.999 } 00:12:10.999 ] 00:12:10.999 }' 00:12:10.999 20:08:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.999 20:08:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.574 20:08:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:11.574 20:08:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:11.574 [2024-10-17 20:08:57.049259] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:12.507 20:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:12.507 20:08:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.507 20:08:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.507 [2024-10-17 20:08:57.926508] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:12:12.507 [2024-10-17 20:08:57.926589] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:12.507 [2024-10-17 20:08:57.926853] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005fb0 
00:12:12.507 20:08:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.507 20:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:12.507 20:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:12.507 20:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:12:12.507 20:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:12:12.507 20:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:12.507 20:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:12.507 20:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:12.507 20:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:12.507 20:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:12.507 20:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:12.507 20:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.507 20:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.507 20:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.507 20:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.507 20:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.507 20:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:12.507 20:08:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:12.507 20:08:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.507 20:08:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.507 20:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.507 "name": "raid_bdev1", 00:12:12.507 "uuid": "16c613d6-e5ce-49ac-a175-a5a35d1a2a41", 00:12:12.507 "strip_size_kb": 0, 00:12:12.507 "state": "online", 00:12:12.507 "raid_level": "raid1", 00:12:12.507 "superblock": true, 00:12:12.507 "num_base_bdevs": 3, 00:12:12.507 "num_base_bdevs_discovered": 2, 00:12:12.507 "num_base_bdevs_operational": 2, 00:12:12.507 "base_bdevs_list": [ 00:12:12.507 { 00:12:12.507 "name": null, 00:12:12.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.507 "is_configured": false, 00:12:12.507 "data_offset": 0, 00:12:12.507 "data_size": 63488 00:12:12.507 }, 00:12:12.507 { 00:12:12.507 "name": "BaseBdev2", 00:12:12.507 "uuid": "b7410c98-9bc7-5af3-9118-57636bcc0181", 00:12:12.507 "is_configured": true, 00:12:12.507 "data_offset": 2048, 00:12:12.507 "data_size": 63488 00:12:12.507 }, 00:12:12.507 { 00:12:12.507 "name": "BaseBdev3", 00:12:12.507 "uuid": "58abb58e-dd7e-5208-86ce-f9a96d6b074d", 00:12:12.507 "is_configured": true, 00:12:12.507 "data_offset": 2048, 00:12:12.507 "data_size": 63488 00:12:12.507 } 00:12:12.507 ] 00:12:12.507 }' 00:12:12.507 20:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.507 20:08:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.073 20:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:13.073 20:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.073 20:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.073 [2024-10-17 20:08:58.467993] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:13.073 [2024-10-17 20:08:58.468216] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:13.073 [2024-10-17 20:08:58.471808] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:13.073 { 00:12:13.073 "results": [ 00:12:13.073 { 00:12:13.073 "job": "raid_bdev1", 00:12:13.073 "core_mask": "0x1", 00:12:13.073 "workload": "randrw", 00:12:13.073 "percentage": 50, 00:12:13.073 "status": "finished", 00:12:13.073 "queue_depth": 1, 00:12:13.073 "io_size": 131072, 00:12:13.073 "runtime": 1.416394, 00:12:13.073 "iops": 9893.433606750665, 00:12:13.073 "mibps": 1236.679200843833, 00:12:13.073 "io_failed": 0, 00:12:13.073 "io_timeout": 0, 00:12:13.073 "avg_latency_us": 96.732643324705, 00:12:13.073 "min_latency_us": 42.82181818181818, 00:12:13.073 "max_latency_us": 1846.9236363636364 00:12:13.073 } 00:12:13.073 ], 00:12:13.073 "core_count": 1 00:12:13.073 } 00:12:13.073 [2024-10-17 20:08:58.472040] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:13.073 [2024-10-17 20:08:58.472243] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:13.073 [2024-10-17 20:08:58.472274] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:13.073 20:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.073 20:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69222 00:12:13.073 20:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 69222 ']' 00:12:13.073 20:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 69222 00:12:13.073 20:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:12:13.073 20:08:58 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:13.073 20:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69222 00:12:13.073 killing process with pid 69222 00:12:13.073 20:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:13.073 20:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:13.073 20:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69222' 00:12:13.073 20:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 69222 00:12:13.073 [2024-10-17 20:08:58.512662] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:13.073 20:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 69222 00:12:13.331 [2024-10-17 20:08:58.728726] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:14.264 20:08:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.kXlASsEbFW 00:12:14.264 20:08:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:14.264 20:08:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:14.264 20:08:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:12:14.264 20:08:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:14.264 20:08:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:14.264 20:08:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:14.264 ************************************ 00:12:14.264 END TEST raid_write_error_test 00:12:14.264 ************************************ 00:12:14.264 20:08:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 
]] 00:12:14.264 00:12:14.264 real 0m4.807s 00:12:14.264 user 0m6.011s 00:12:14.264 sys 0m0.590s 00:12:14.264 20:08:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:14.264 20:08:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.264 20:08:59 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:12:14.264 20:08:59 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:14.264 20:08:59 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:12:14.264 20:08:59 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:14.264 20:08:59 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:14.264 20:08:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:14.264 ************************************ 00:12:14.264 START TEST raid_state_function_test 00:12:14.264 ************************************ 00:12:14.264 20:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 false 00:12:14.264 20:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:12:14.264 20:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:14.264 20:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:14.264 20:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:14.264 20:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:14.264 20:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:14.264 20:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:14.264 20:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:14.264 
20:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:14.264 20:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:14.264 20:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:14.264 20:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:14.264 20:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:14.264 20:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:14.264 20:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:14.264 20:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:14.264 20:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:14.264 20:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:14.264 20:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:14.264 20:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:14.264 20:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:14.264 20:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:14.264 20:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:14.264 20:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:14.264 20:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:12:14.264 20:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:14.264 20:08:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:14.264 20:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:14.264 20:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:14.264 Process raid pid: 69361 00:12:14.264 20:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69361 00:12:14.264 20:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69361' 00:12:14.264 20:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69361 00:12:14.264 20:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:14.264 20:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 69361 ']' 00:12:14.264 20:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:14.264 20:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:14.264 20:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:14.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:14.264 20:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:14.264 20:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.522 [2024-10-17 20:09:00.004969] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 
00:12:14.522 [2024-10-17 20:09:00.005395] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:14.522 [2024-10-17 20:09:00.172539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.781 [2024-10-17 20:09:00.311688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.039 [2024-10-17 20:09:00.524269] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:15.039 [2024-10-17 20:09:00.524329] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:15.605 20:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:15.605 20:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:12:15.605 20:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:15.605 20:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.605 20:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.605 [2024-10-17 20:09:00.999060] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:15.605 [2024-10-17 20:09:00.999284] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:15.605 [2024-10-17 20:09:00.999460] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:15.605 [2024-10-17 20:09:00.999497] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:15.605 [2024-10-17 20:09:00.999511] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:12:15.605 [2024-10-17 20:09:00.999528] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:15.605 [2024-10-17 20:09:00.999538] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:15.605 [2024-10-17 20:09:00.999553] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:15.605 20:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.605 20:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:15.605 20:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:15.605 20:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:15.605 20:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:15.605 20:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:15.605 20:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:15.605 20:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.605 20:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.605 20:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.605 20:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.605 20:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.605 20:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:15.605 20:09:01 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.605 20:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.605 20:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.605 20:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.605 "name": "Existed_Raid", 00:12:15.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.605 "strip_size_kb": 64, 00:12:15.605 "state": "configuring", 00:12:15.605 "raid_level": "raid0", 00:12:15.605 "superblock": false, 00:12:15.605 "num_base_bdevs": 4, 00:12:15.605 "num_base_bdevs_discovered": 0, 00:12:15.605 "num_base_bdevs_operational": 4, 00:12:15.605 "base_bdevs_list": [ 00:12:15.605 { 00:12:15.605 "name": "BaseBdev1", 00:12:15.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.605 "is_configured": false, 00:12:15.605 "data_offset": 0, 00:12:15.605 "data_size": 0 00:12:15.605 }, 00:12:15.605 { 00:12:15.605 "name": "BaseBdev2", 00:12:15.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.605 "is_configured": false, 00:12:15.605 "data_offset": 0, 00:12:15.605 "data_size": 0 00:12:15.605 }, 00:12:15.605 { 00:12:15.605 "name": "BaseBdev3", 00:12:15.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.605 "is_configured": false, 00:12:15.605 "data_offset": 0, 00:12:15.605 "data_size": 0 00:12:15.605 }, 00:12:15.605 { 00:12:15.605 "name": "BaseBdev4", 00:12:15.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.605 "is_configured": false, 00:12:15.605 "data_offset": 0, 00:12:15.605 "data_size": 0 00:12:15.605 } 00:12:15.605 ] 00:12:15.605 }' 00:12:15.605 20:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.605 20:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.227 20:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:12:16.227 20:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.227 20:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.227 [2024-10-17 20:09:01.543119] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:16.227 [2024-10-17 20:09:01.543170] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:16.227 20:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.227 20:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:16.227 20:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.227 20:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.227 [2024-10-17 20:09:01.551119] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:16.227 [2024-10-17 20:09:01.551303] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:16.227 [2024-10-17 20:09:01.551438] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:16.227 [2024-10-17 20:09:01.551501] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:16.227 [2024-10-17 20:09:01.551618] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:16.227 [2024-10-17 20:09:01.551679] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:16.227 [2024-10-17 20:09:01.551861] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:16.227 [2024-10-17 20:09:01.551924] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:16.227 20:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.227 20:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:16.227 20:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.227 20:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.227 [2024-10-17 20:09:01.597052] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:16.227 BaseBdev1 00:12:16.227 20:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.227 20:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:16.227 20:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:16.227 20:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:16.227 20:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:16.227 20:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:16.227 20:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:16.227 20:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:16.227 20:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.227 20:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.227 20:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.227 20:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:16.227 20:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.227 20:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.227 [ 00:12:16.227 { 00:12:16.227 "name": "BaseBdev1", 00:12:16.227 "aliases": [ 00:12:16.227 "d4d528eb-fefb-4556-8636-632b74b17566" 00:12:16.227 ], 00:12:16.227 "product_name": "Malloc disk", 00:12:16.227 "block_size": 512, 00:12:16.227 "num_blocks": 65536, 00:12:16.227 "uuid": "d4d528eb-fefb-4556-8636-632b74b17566", 00:12:16.227 "assigned_rate_limits": { 00:12:16.227 "rw_ios_per_sec": 0, 00:12:16.227 "rw_mbytes_per_sec": 0, 00:12:16.227 "r_mbytes_per_sec": 0, 00:12:16.227 "w_mbytes_per_sec": 0 00:12:16.227 }, 00:12:16.227 "claimed": true, 00:12:16.227 "claim_type": "exclusive_write", 00:12:16.227 "zoned": false, 00:12:16.227 "supported_io_types": { 00:12:16.227 "read": true, 00:12:16.227 "write": true, 00:12:16.227 "unmap": true, 00:12:16.227 "flush": true, 00:12:16.227 "reset": true, 00:12:16.227 "nvme_admin": false, 00:12:16.227 "nvme_io": false, 00:12:16.227 "nvme_io_md": false, 00:12:16.227 "write_zeroes": true, 00:12:16.227 "zcopy": true, 00:12:16.227 "get_zone_info": false, 00:12:16.227 "zone_management": false, 00:12:16.227 "zone_append": false, 00:12:16.227 "compare": false, 00:12:16.227 "compare_and_write": false, 00:12:16.227 "abort": true, 00:12:16.227 "seek_hole": false, 00:12:16.227 "seek_data": false, 00:12:16.227 "copy": true, 00:12:16.227 "nvme_iov_md": false 00:12:16.227 }, 00:12:16.227 "memory_domains": [ 00:12:16.227 { 00:12:16.227 "dma_device_id": "system", 00:12:16.227 "dma_device_type": 1 00:12:16.227 }, 00:12:16.227 { 00:12:16.227 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.227 "dma_device_type": 2 00:12:16.227 } 00:12:16.227 ], 00:12:16.227 "driver_specific": {} 00:12:16.227 } 00:12:16.227 ] 00:12:16.227 20:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:12:16.227 20:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:16.227 20:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:16.227 20:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:16.227 20:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:16.227 20:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:16.227 20:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:16.227 20:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:16.227 20:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.227 20:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.227 20:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.227 20:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.227 20:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:16.227 20:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.228 20:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.228 20:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.228 20:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.228 20:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.228 "name": "Existed_Raid", 
00:12:16.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.228 "strip_size_kb": 64, 00:12:16.228 "state": "configuring", 00:12:16.228 "raid_level": "raid0", 00:12:16.228 "superblock": false, 00:12:16.228 "num_base_bdevs": 4, 00:12:16.228 "num_base_bdevs_discovered": 1, 00:12:16.228 "num_base_bdevs_operational": 4, 00:12:16.228 "base_bdevs_list": [ 00:12:16.228 { 00:12:16.228 "name": "BaseBdev1", 00:12:16.228 "uuid": "d4d528eb-fefb-4556-8636-632b74b17566", 00:12:16.228 "is_configured": true, 00:12:16.228 "data_offset": 0, 00:12:16.228 "data_size": 65536 00:12:16.228 }, 00:12:16.228 { 00:12:16.228 "name": "BaseBdev2", 00:12:16.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.228 "is_configured": false, 00:12:16.228 "data_offset": 0, 00:12:16.228 "data_size": 0 00:12:16.228 }, 00:12:16.228 { 00:12:16.228 "name": "BaseBdev3", 00:12:16.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.228 "is_configured": false, 00:12:16.228 "data_offset": 0, 00:12:16.228 "data_size": 0 00:12:16.228 }, 00:12:16.228 { 00:12:16.228 "name": "BaseBdev4", 00:12:16.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.228 "is_configured": false, 00:12:16.228 "data_offset": 0, 00:12:16.228 "data_size": 0 00:12:16.228 } 00:12:16.228 ] 00:12:16.228 }' 00:12:16.228 20:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.228 20:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.794 20:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:16.794 20:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.794 20:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.794 [2024-10-17 20:09:02.145779] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:16.794 [2024-10-17 20:09:02.145850] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:16.794 20:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.794 20:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:16.794 20:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.794 20:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.794 [2024-10-17 20:09:02.153822] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:16.794 [2024-10-17 20:09:02.156409] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:16.794 [2024-10-17 20:09:02.156619] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:16.794 [2024-10-17 20:09:02.156752] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:16.794 [2024-10-17 20:09:02.156790] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:16.794 [2024-10-17 20:09:02.156804] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:16.794 [2024-10-17 20:09:02.156819] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:16.794 20:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.794 20:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:16.794 20:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:16.794 20:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:12:16.794 20:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:16.794 20:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:16.794 20:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:16.794 20:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:16.794 20:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:16.794 20:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.794 20:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.794 20:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.794 20:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.794 20:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.794 20:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:16.794 20:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.794 20:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.794 20:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.794 20:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.794 "name": "Existed_Raid", 00:12:16.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.794 "strip_size_kb": 64, 00:12:16.794 "state": "configuring", 00:12:16.794 "raid_level": "raid0", 00:12:16.794 "superblock": false, 00:12:16.794 "num_base_bdevs": 4, 00:12:16.794 
"num_base_bdevs_discovered": 1, 00:12:16.794 "num_base_bdevs_operational": 4, 00:12:16.794 "base_bdevs_list": [ 00:12:16.794 { 00:12:16.794 "name": "BaseBdev1", 00:12:16.794 "uuid": "d4d528eb-fefb-4556-8636-632b74b17566", 00:12:16.794 "is_configured": true, 00:12:16.794 "data_offset": 0, 00:12:16.794 "data_size": 65536 00:12:16.794 }, 00:12:16.794 { 00:12:16.794 "name": "BaseBdev2", 00:12:16.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.794 "is_configured": false, 00:12:16.794 "data_offset": 0, 00:12:16.794 "data_size": 0 00:12:16.794 }, 00:12:16.794 { 00:12:16.794 "name": "BaseBdev3", 00:12:16.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.794 "is_configured": false, 00:12:16.794 "data_offset": 0, 00:12:16.794 "data_size": 0 00:12:16.794 }, 00:12:16.794 { 00:12:16.794 "name": "BaseBdev4", 00:12:16.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.794 "is_configured": false, 00:12:16.794 "data_offset": 0, 00:12:16.794 "data_size": 0 00:12:16.794 } 00:12:16.794 ] 00:12:16.794 }' 00:12:16.794 20:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.794 20:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.053 20:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:17.053 20:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.053 20:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.311 [2024-10-17 20:09:02.724795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:17.311 BaseBdev2 00:12:17.311 20:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.311 20:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:17.311 20:09:02 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:17.311 20:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:17.311 20:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:17.311 20:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:17.311 20:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:17.311 20:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:17.311 20:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.311 20:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.311 20:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.311 20:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:17.311 20:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.311 20:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.311 [ 00:12:17.311 { 00:12:17.311 "name": "BaseBdev2", 00:12:17.311 "aliases": [ 00:12:17.311 "9c5f2afe-6e96-4ea8-bf94-cbb9d05a330c" 00:12:17.311 ], 00:12:17.311 "product_name": "Malloc disk", 00:12:17.311 "block_size": 512, 00:12:17.311 "num_blocks": 65536, 00:12:17.311 "uuid": "9c5f2afe-6e96-4ea8-bf94-cbb9d05a330c", 00:12:17.311 "assigned_rate_limits": { 00:12:17.311 "rw_ios_per_sec": 0, 00:12:17.311 "rw_mbytes_per_sec": 0, 00:12:17.311 "r_mbytes_per_sec": 0, 00:12:17.311 "w_mbytes_per_sec": 0 00:12:17.311 }, 00:12:17.311 "claimed": true, 00:12:17.311 "claim_type": "exclusive_write", 00:12:17.311 "zoned": false, 00:12:17.311 "supported_io_types": { 
00:12:17.311 "read": true, 00:12:17.311 "write": true, 00:12:17.311 "unmap": true, 00:12:17.311 "flush": true, 00:12:17.311 "reset": true, 00:12:17.311 "nvme_admin": false, 00:12:17.311 "nvme_io": false, 00:12:17.311 "nvme_io_md": false, 00:12:17.311 "write_zeroes": true, 00:12:17.311 "zcopy": true, 00:12:17.311 "get_zone_info": false, 00:12:17.311 "zone_management": false, 00:12:17.311 "zone_append": false, 00:12:17.311 "compare": false, 00:12:17.311 "compare_and_write": false, 00:12:17.311 "abort": true, 00:12:17.311 "seek_hole": false, 00:12:17.311 "seek_data": false, 00:12:17.311 "copy": true, 00:12:17.311 "nvme_iov_md": false 00:12:17.311 }, 00:12:17.311 "memory_domains": [ 00:12:17.311 { 00:12:17.311 "dma_device_id": "system", 00:12:17.311 "dma_device_type": 1 00:12:17.311 }, 00:12:17.311 { 00:12:17.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.311 "dma_device_type": 2 00:12:17.311 } 00:12:17.311 ], 00:12:17.311 "driver_specific": {} 00:12:17.311 } 00:12:17.311 ] 00:12:17.311 20:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.311 20:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:17.311 20:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:17.311 20:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:17.311 20:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:17.311 20:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:17.311 20:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:17.311 20:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:17.311 20:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:12:17.311 20:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:17.311 20:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.311 20:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.311 20:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.311 20:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.311 20:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.311 20:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:17.311 20:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.311 20:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.311 20:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.311 20:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.311 "name": "Existed_Raid", 00:12:17.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.311 "strip_size_kb": 64, 00:12:17.311 "state": "configuring", 00:12:17.311 "raid_level": "raid0", 00:12:17.311 "superblock": false, 00:12:17.311 "num_base_bdevs": 4, 00:12:17.311 "num_base_bdevs_discovered": 2, 00:12:17.311 "num_base_bdevs_operational": 4, 00:12:17.311 "base_bdevs_list": [ 00:12:17.311 { 00:12:17.311 "name": "BaseBdev1", 00:12:17.312 "uuid": "d4d528eb-fefb-4556-8636-632b74b17566", 00:12:17.312 "is_configured": true, 00:12:17.312 "data_offset": 0, 00:12:17.312 "data_size": 65536 00:12:17.312 }, 00:12:17.312 { 00:12:17.312 "name": "BaseBdev2", 00:12:17.312 "uuid": "9c5f2afe-6e96-4ea8-bf94-cbb9d05a330c", 00:12:17.312 
"is_configured": true, 00:12:17.312 "data_offset": 0, 00:12:17.312 "data_size": 65536 00:12:17.312 }, 00:12:17.312 { 00:12:17.312 "name": "BaseBdev3", 00:12:17.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.312 "is_configured": false, 00:12:17.312 "data_offset": 0, 00:12:17.312 "data_size": 0 00:12:17.312 }, 00:12:17.312 { 00:12:17.312 "name": "BaseBdev4", 00:12:17.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.312 "is_configured": false, 00:12:17.312 "data_offset": 0, 00:12:17.312 "data_size": 0 00:12:17.312 } 00:12:17.312 ] 00:12:17.312 }' 00:12:17.312 20:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.312 20:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.879 20:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:17.879 20:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.879 20:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.879 [2024-10-17 20:09:03.359960] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:17.879 BaseBdev3 00:12:17.879 20:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.879 20:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:17.879 20:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:12:17.879 20:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:17.879 20:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:17.879 20:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:17.879 20:09:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:17.879 20:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:17.879 20:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.879 20:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.879 20:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.879 20:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:17.879 20:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.879 20:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.879 [ 00:12:17.879 { 00:12:17.879 "name": "BaseBdev3", 00:12:17.879 "aliases": [ 00:12:17.879 "2184b014-497e-4a55-a037-f8be7eae695d" 00:12:17.879 ], 00:12:17.879 "product_name": "Malloc disk", 00:12:17.879 "block_size": 512, 00:12:17.879 "num_blocks": 65536, 00:12:17.879 "uuid": "2184b014-497e-4a55-a037-f8be7eae695d", 00:12:17.879 "assigned_rate_limits": { 00:12:17.879 "rw_ios_per_sec": 0, 00:12:17.879 "rw_mbytes_per_sec": 0, 00:12:17.879 "r_mbytes_per_sec": 0, 00:12:17.879 "w_mbytes_per_sec": 0 00:12:17.879 }, 00:12:17.879 "claimed": true, 00:12:17.879 "claim_type": "exclusive_write", 00:12:17.879 "zoned": false, 00:12:17.879 "supported_io_types": { 00:12:17.879 "read": true, 00:12:17.879 "write": true, 00:12:17.879 "unmap": true, 00:12:17.879 "flush": true, 00:12:17.879 "reset": true, 00:12:17.879 "nvme_admin": false, 00:12:17.879 "nvme_io": false, 00:12:17.879 "nvme_io_md": false, 00:12:17.879 "write_zeroes": true, 00:12:17.879 "zcopy": true, 00:12:17.879 "get_zone_info": false, 00:12:17.879 "zone_management": false, 00:12:17.879 "zone_append": false, 00:12:17.879 "compare": false, 00:12:17.879 "compare_and_write": false, 
00:12:17.879 "abort": true, 00:12:17.879 "seek_hole": false, 00:12:17.879 "seek_data": false, 00:12:17.879 "copy": true, 00:12:17.879 "nvme_iov_md": false 00:12:17.879 }, 00:12:17.879 "memory_domains": [ 00:12:17.879 { 00:12:17.879 "dma_device_id": "system", 00:12:17.879 "dma_device_type": 1 00:12:17.879 }, 00:12:17.879 { 00:12:17.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.879 "dma_device_type": 2 00:12:17.879 } 00:12:17.879 ], 00:12:17.879 "driver_specific": {} 00:12:17.879 } 00:12:17.879 ] 00:12:17.879 20:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.879 20:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:17.879 20:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:17.879 20:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:17.879 20:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:17.879 20:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:17.879 20:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:17.879 20:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:17.879 20:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:17.879 20:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:17.879 20:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.879 20:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.879 20:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:12:17.879 20:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.879 20:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:17.879 20:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.879 20:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.879 20:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.879 20:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.879 20:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.879 "name": "Existed_Raid", 00:12:17.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.879 "strip_size_kb": 64, 00:12:17.879 "state": "configuring", 00:12:17.879 "raid_level": "raid0", 00:12:17.879 "superblock": false, 00:12:17.879 "num_base_bdevs": 4, 00:12:17.879 "num_base_bdevs_discovered": 3, 00:12:17.879 "num_base_bdevs_operational": 4, 00:12:17.879 "base_bdevs_list": [ 00:12:17.879 { 00:12:17.879 "name": "BaseBdev1", 00:12:17.879 "uuid": "d4d528eb-fefb-4556-8636-632b74b17566", 00:12:17.879 "is_configured": true, 00:12:17.879 "data_offset": 0, 00:12:17.879 "data_size": 65536 00:12:17.879 }, 00:12:17.879 { 00:12:17.879 "name": "BaseBdev2", 00:12:17.879 "uuid": "9c5f2afe-6e96-4ea8-bf94-cbb9d05a330c", 00:12:17.879 "is_configured": true, 00:12:17.879 "data_offset": 0, 00:12:17.879 "data_size": 65536 00:12:17.879 }, 00:12:17.879 { 00:12:17.879 "name": "BaseBdev3", 00:12:17.879 "uuid": "2184b014-497e-4a55-a037-f8be7eae695d", 00:12:17.879 "is_configured": true, 00:12:17.879 "data_offset": 0, 00:12:17.879 "data_size": 65536 00:12:17.879 }, 00:12:17.879 { 00:12:17.879 "name": "BaseBdev4", 00:12:17.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.879 "is_configured": false, 
00:12:17.879 "data_offset": 0, 00:12:17.879 "data_size": 0 00:12:17.879 } 00:12:17.879 ] 00:12:17.879 }' 00:12:17.879 20:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.879 20:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.446 20:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:18.446 20:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.446 20:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.446 [2024-10-17 20:09:03.974844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:18.446 [2024-10-17 20:09:03.974899] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:18.446 [2024-10-17 20:09:03.974913] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:12:18.446 BaseBdev4 00:12:18.446 [2024-10-17 20:09:03.975356] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:18.446 [2024-10-17 20:09:03.975598] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:18.446 [2024-10-17 20:09:03.975622] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:18.446 [2024-10-17 20:09:03.975982] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:18.446 20:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.446 20:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:18.446 20:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:12:18.446 20:09:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:18.446 20:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:18.446 20:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:18.446 20:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:18.446 20:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:18.446 20:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.446 20:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.446 20:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.446 20:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:18.446 20:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.446 20:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.446 [ 00:12:18.446 { 00:12:18.446 "name": "BaseBdev4", 00:12:18.446 "aliases": [ 00:12:18.446 "58a73a78-9126-4661-a277-c9fbc4d512e5" 00:12:18.446 ], 00:12:18.446 "product_name": "Malloc disk", 00:12:18.446 "block_size": 512, 00:12:18.446 "num_blocks": 65536, 00:12:18.446 "uuid": "58a73a78-9126-4661-a277-c9fbc4d512e5", 00:12:18.446 "assigned_rate_limits": { 00:12:18.446 "rw_ios_per_sec": 0, 00:12:18.446 "rw_mbytes_per_sec": 0, 00:12:18.446 "r_mbytes_per_sec": 0, 00:12:18.446 "w_mbytes_per_sec": 0 00:12:18.446 }, 00:12:18.446 "claimed": true, 00:12:18.446 "claim_type": "exclusive_write", 00:12:18.446 "zoned": false, 00:12:18.446 "supported_io_types": { 00:12:18.446 "read": true, 00:12:18.446 "write": true, 00:12:18.446 "unmap": true, 00:12:18.446 "flush": true, 00:12:18.446 "reset": true, 00:12:18.446 
"nvme_admin": false, 00:12:18.446 "nvme_io": false, 00:12:18.446 "nvme_io_md": false, 00:12:18.446 "write_zeroes": true, 00:12:18.446 "zcopy": true, 00:12:18.446 "get_zone_info": false, 00:12:18.446 "zone_management": false, 00:12:18.446 "zone_append": false, 00:12:18.446 "compare": false, 00:12:18.446 "compare_and_write": false, 00:12:18.446 "abort": true, 00:12:18.446 "seek_hole": false, 00:12:18.446 "seek_data": false, 00:12:18.446 "copy": true, 00:12:18.446 "nvme_iov_md": false 00:12:18.446 }, 00:12:18.446 "memory_domains": [ 00:12:18.446 { 00:12:18.446 "dma_device_id": "system", 00:12:18.446 "dma_device_type": 1 00:12:18.446 }, 00:12:18.446 { 00:12:18.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.446 "dma_device_type": 2 00:12:18.446 } 00:12:18.446 ], 00:12:18.446 "driver_specific": {} 00:12:18.446 } 00:12:18.446 ] 00:12:18.446 20:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.446 20:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:18.446 20:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:18.446 20:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:18.446 20:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:12:18.446 20:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:18.447 20:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:18.447 20:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:18.447 20:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:18.447 20:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:18.447 20:09:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.447 20:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.447 20:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.447 20:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.447 20:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:18.447 20:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.447 20:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.447 20:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.447 20:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.447 20:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.447 "name": "Existed_Raid", 00:12:18.447 "uuid": "bf85cd9d-736a-4921-87b2-9aaa345e900d", 00:12:18.447 "strip_size_kb": 64, 00:12:18.447 "state": "online", 00:12:18.447 "raid_level": "raid0", 00:12:18.447 "superblock": false, 00:12:18.447 "num_base_bdevs": 4, 00:12:18.447 "num_base_bdevs_discovered": 4, 00:12:18.447 "num_base_bdevs_operational": 4, 00:12:18.447 "base_bdevs_list": [ 00:12:18.447 { 00:12:18.447 "name": "BaseBdev1", 00:12:18.447 "uuid": "d4d528eb-fefb-4556-8636-632b74b17566", 00:12:18.447 "is_configured": true, 00:12:18.447 "data_offset": 0, 00:12:18.447 "data_size": 65536 00:12:18.447 }, 00:12:18.447 { 00:12:18.447 "name": "BaseBdev2", 00:12:18.447 "uuid": "9c5f2afe-6e96-4ea8-bf94-cbb9d05a330c", 00:12:18.447 "is_configured": true, 00:12:18.447 "data_offset": 0, 00:12:18.447 "data_size": 65536 00:12:18.447 }, 00:12:18.447 { 00:12:18.447 "name": "BaseBdev3", 00:12:18.447 "uuid": 
"2184b014-497e-4a55-a037-f8be7eae695d", 00:12:18.447 "is_configured": true, 00:12:18.447 "data_offset": 0, 00:12:18.447 "data_size": 65536 00:12:18.447 }, 00:12:18.447 { 00:12:18.447 "name": "BaseBdev4", 00:12:18.447 "uuid": "58a73a78-9126-4661-a277-c9fbc4d512e5", 00:12:18.447 "is_configured": true, 00:12:18.447 "data_offset": 0, 00:12:18.447 "data_size": 65536 00:12:18.447 } 00:12:18.447 ] 00:12:18.447 }' 00:12:18.447 20:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.447 20:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.013 20:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:19.013 20:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:19.013 20:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:19.013 20:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:19.013 20:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:19.013 20:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:19.013 20:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:19.013 20:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:19.013 20:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.013 20:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.013 [2024-10-17 20:09:04.523586] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:19.013 20:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.013 20:09:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:19.013 "name": "Existed_Raid", 00:12:19.013 "aliases": [ 00:12:19.013 "bf85cd9d-736a-4921-87b2-9aaa345e900d" 00:12:19.013 ], 00:12:19.013 "product_name": "Raid Volume", 00:12:19.013 "block_size": 512, 00:12:19.013 "num_blocks": 262144, 00:12:19.013 "uuid": "bf85cd9d-736a-4921-87b2-9aaa345e900d", 00:12:19.013 "assigned_rate_limits": { 00:12:19.013 "rw_ios_per_sec": 0, 00:12:19.013 "rw_mbytes_per_sec": 0, 00:12:19.013 "r_mbytes_per_sec": 0, 00:12:19.013 "w_mbytes_per_sec": 0 00:12:19.013 }, 00:12:19.013 "claimed": false, 00:12:19.013 "zoned": false, 00:12:19.013 "supported_io_types": { 00:12:19.013 "read": true, 00:12:19.013 "write": true, 00:12:19.013 "unmap": true, 00:12:19.013 "flush": true, 00:12:19.013 "reset": true, 00:12:19.013 "nvme_admin": false, 00:12:19.013 "nvme_io": false, 00:12:19.013 "nvme_io_md": false, 00:12:19.013 "write_zeroes": true, 00:12:19.013 "zcopy": false, 00:12:19.013 "get_zone_info": false, 00:12:19.013 "zone_management": false, 00:12:19.013 "zone_append": false, 00:12:19.013 "compare": false, 00:12:19.013 "compare_and_write": false, 00:12:19.013 "abort": false, 00:12:19.013 "seek_hole": false, 00:12:19.013 "seek_data": false, 00:12:19.013 "copy": false, 00:12:19.013 "nvme_iov_md": false 00:12:19.013 }, 00:12:19.013 "memory_domains": [ 00:12:19.013 { 00:12:19.013 "dma_device_id": "system", 00:12:19.013 "dma_device_type": 1 00:12:19.013 }, 00:12:19.013 { 00:12:19.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.013 "dma_device_type": 2 00:12:19.013 }, 00:12:19.013 { 00:12:19.013 "dma_device_id": "system", 00:12:19.013 "dma_device_type": 1 00:12:19.013 }, 00:12:19.013 { 00:12:19.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.013 "dma_device_type": 2 00:12:19.013 }, 00:12:19.013 { 00:12:19.013 "dma_device_id": "system", 00:12:19.013 "dma_device_type": 1 00:12:19.013 }, 00:12:19.013 { 00:12:19.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:12:19.013 "dma_device_type": 2 00:12:19.013 }, 00:12:19.013 { 00:12:19.013 "dma_device_id": "system", 00:12:19.013 "dma_device_type": 1 00:12:19.013 }, 00:12:19.013 { 00:12:19.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.013 "dma_device_type": 2 00:12:19.013 } 00:12:19.013 ], 00:12:19.013 "driver_specific": { 00:12:19.013 "raid": { 00:12:19.013 "uuid": "bf85cd9d-736a-4921-87b2-9aaa345e900d", 00:12:19.013 "strip_size_kb": 64, 00:12:19.013 "state": "online", 00:12:19.013 "raid_level": "raid0", 00:12:19.013 "superblock": false, 00:12:19.013 "num_base_bdevs": 4, 00:12:19.013 "num_base_bdevs_discovered": 4, 00:12:19.013 "num_base_bdevs_operational": 4, 00:12:19.013 "base_bdevs_list": [ 00:12:19.013 { 00:12:19.013 "name": "BaseBdev1", 00:12:19.013 "uuid": "d4d528eb-fefb-4556-8636-632b74b17566", 00:12:19.013 "is_configured": true, 00:12:19.013 "data_offset": 0, 00:12:19.013 "data_size": 65536 00:12:19.013 }, 00:12:19.013 { 00:12:19.013 "name": "BaseBdev2", 00:12:19.013 "uuid": "9c5f2afe-6e96-4ea8-bf94-cbb9d05a330c", 00:12:19.013 "is_configured": true, 00:12:19.013 "data_offset": 0, 00:12:19.013 "data_size": 65536 00:12:19.013 }, 00:12:19.013 { 00:12:19.013 "name": "BaseBdev3", 00:12:19.013 "uuid": "2184b014-497e-4a55-a037-f8be7eae695d", 00:12:19.013 "is_configured": true, 00:12:19.013 "data_offset": 0, 00:12:19.013 "data_size": 65536 00:12:19.013 }, 00:12:19.013 { 00:12:19.013 "name": "BaseBdev4", 00:12:19.013 "uuid": "58a73a78-9126-4661-a277-c9fbc4d512e5", 00:12:19.013 "is_configured": true, 00:12:19.013 "data_offset": 0, 00:12:19.013 "data_size": 65536 00:12:19.013 } 00:12:19.013 ] 00:12:19.013 } 00:12:19.013 } 00:12:19.013 }' 00:12:19.013 20:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:19.013 20:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:19.013 BaseBdev2 00:12:19.013 BaseBdev3 
00:12:19.013 BaseBdev4' 00:12:19.013 20:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:19.276 20:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:19.276 20:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:19.276 20:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:19.276 20:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.276 20:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.276 20:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:19.276 20:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.276 20:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:19.276 20:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:19.276 20:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:19.276 20:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:19.276 20:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:19.276 20:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.276 20:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.276 20:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.276 20:09:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:19.276 20:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:19.276 20:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:19.276 20:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:19.276 20:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.276 20:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.276 20:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:19.276 20:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.276 20:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:19.276 20:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:19.276 20:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:19.276 20:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:19.276 20:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.276 20:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.276 20:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:19.276 20:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.276 20:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:19.276 20:09:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:19.276 20:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:19.276 20:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.276 20:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.276 [2024-10-17 20:09:04.915343] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:19.276 [2024-10-17 20:09:04.915429] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:19.276 [2024-10-17 20:09:04.915493] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:19.535 20:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.535 20:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:19.535 20:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:12:19.535 20:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:19.535 20:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:19.535 20:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:19.535 20:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:12:19.535 20:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:19.535 20:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:19.535 20:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:19.535 20:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:12:19.535 20:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:19.535 20:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.535 20:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.535 20:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.535 20:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.535 20:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.535 20:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:19.535 20:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.535 20:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.535 20:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.535 20:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.535 "name": "Existed_Raid", 00:12:19.535 "uuid": "bf85cd9d-736a-4921-87b2-9aaa345e900d", 00:12:19.535 "strip_size_kb": 64, 00:12:19.535 "state": "offline", 00:12:19.535 "raid_level": "raid0", 00:12:19.535 "superblock": false, 00:12:19.535 "num_base_bdevs": 4, 00:12:19.535 "num_base_bdevs_discovered": 3, 00:12:19.535 "num_base_bdevs_operational": 3, 00:12:19.535 "base_bdevs_list": [ 00:12:19.535 { 00:12:19.535 "name": null, 00:12:19.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.535 "is_configured": false, 00:12:19.536 "data_offset": 0, 00:12:19.536 "data_size": 65536 00:12:19.536 }, 00:12:19.536 { 00:12:19.536 "name": "BaseBdev2", 00:12:19.536 "uuid": "9c5f2afe-6e96-4ea8-bf94-cbb9d05a330c", 00:12:19.536 "is_configured": 
true, 00:12:19.536 "data_offset": 0, 00:12:19.536 "data_size": 65536 00:12:19.536 }, 00:12:19.536 { 00:12:19.536 "name": "BaseBdev3", 00:12:19.536 "uuid": "2184b014-497e-4a55-a037-f8be7eae695d", 00:12:19.536 "is_configured": true, 00:12:19.536 "data_offset": 0, 00:12:19.536 "data_size": 65536 00:12:19.536 }, 00:12:19.536 { 00:12:19.536 "name": "BaseBdev4", 00:12:19.536 "uuid": "58a73a78-9126-4661-a277-c9fbc4d512e5", 00:12:19.536 "is_configured": true, 00:12:19.536 "data_offset": 0, 00:12:19.536 "data_size": 65536 00:12:19.536 } 00:12:19.536 ] 00:12:19.536 }' 00:12:19.536 20:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.536 20:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.104 20:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:20.104 20:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:20.104 20:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.104 20:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:20.104 20:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.104 20:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.104 20:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.104 20:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:20.104 20:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:20.104 20:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:20.104 20:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:12:20.104 20:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.104 [2024-10-17 20:09:05.606198] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:20.104 20:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.104 20:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:20.104 20:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:20.104 20:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.104 20:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.104 20:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.104 20:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:20.104 20:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.104 20:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:20.104 20:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:20.104 20:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:20.104 20:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.104 20:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.362 [2024-10-17 20:09:05.757269] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:20.362 20:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.362 20:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:20.362 20:09:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:20.362 20:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.362 20:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.362 20:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.362 20:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:20.362 20:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.362 20:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:20.362 20:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:20.362 20:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:20.362 20:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.362 20:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.362 [2024-10-17 20:09:05.902929] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:20.362 [2024-10-17 20:09:05.903182] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:20.362 20:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.362 20:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:20.363 20:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:20.363 20:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.363 20:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:20.363 20:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.363 20:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:20.363 20:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.621 20:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:20.621 20:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:20.621 20:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:20.621 20:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:20.621 20:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:20.621 20:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:20.621 20:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.621 20:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.621 BaseBdev2 00:12:20.621 20:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.621 20:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:20.621 20:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:20.621 20:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:20.621 20:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:20.621 20:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:20.621 20:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:12:20.621 20:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:20.621 20:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.621 20:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.621 20:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.621 20:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:20.621 20:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.622 20:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.622 [ 00:12:20.622 { 00:12:20.622 "name": "BaseBdev2", 00:12:20.622 "aliases": [ 00:12:20.622 "831a090b-e7bf-456a-a17d-88f781b8cefd" 00:12:20.622 ], 00:12:20.622 "product_name": "Malloc disk", 00:12:20.622 "block_size": 512, 00:12:20.622 "num_blocks": 65536, 00:12:20.622 "uuid": "831a090b-e7bf-456a-a17d-88f781b8cefd", 00:12:20.622 "assigned_rate_limits": { 00:12:20.622 "rw_ios_per_sec": 0, 00:12:20.622 "rw_mbytes_per_sec": 0, 00:12:20.622 "r_mbytes_per_sec": 0, 00:12:20.622 "w_mbytes_per_sec": 0 00:12:20.622 }, 00:12:20.622 "claimed": false, 00:12:20.622 "zoned": false, 00:12:20.622 "supported_io_types": { 00:12:20.622 "read": true, 00:12:20.622 "write": true, 00:12:20.622 "unmap": true, 00:12:20.622 "flush": true, 00:12:20.622 "reset": true, 00:12:20.622 "nvme_admin": false, 00:12:20.622 "nvme_io": false, 00:12:20.622 "nvme_io_md": false, 00:12:20.622 "write_zeroes": true, 00:12:20.622 "zcopy": true, 00:12:20.622 "get_zone_info": false, 00:12:20.622 "zone_management": false, 00:12:20.622 "zone_append": false, 00:12:20.622 "compare": false, 00:12:20.622 "compare_and_write": false, 00:12:20.622 "abort": true, 00:12:20.622 "seek_hole": false, 00:12:20.622 
"seek_data": false, 00:12:20.622 "copy": true, 00:12:20.622 "nvme_iov_md": false 00:12:20.622 }, 00:12:20.622 "memory_domains": [ 00:12:20.622 { 00:12:20.622 "dma_device_id": "system", 00:12:20.622 "dma_device_type": 1 00:12:20.622 }, 00:12:20.622 { 00:12:20.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.622 "dma_device_type": 2 00:12:20.622 } 00:12:20.622 ], 00:12:20.622 "driver_specific": {} 00:12:20.622 } 00:12:20.622 ] 00:12:20.622 20:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.622 20:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:20.622 20:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:20.622 20:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:20.622 20:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:20.622 20:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.622 20:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.622 BaseBdev3 00:12:20.622 20:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.622 20:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:20.622 20:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:12:20.622 20:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:20.622 20:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:20.622 20:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:20.622 20:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:12:20.622 20:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:20.622 20:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.622 20:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.622 20:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.622 20:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:20.622 20:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.622 20:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.622 [ 00:12:20.622 { 00:12:20.622 "name": "BaseBdev3", 00:12:20.622 "aliases": [ 00:12:20.622 "1e1f8aaa-eefd-472f-8911-f17a6cf31b8b" 00:12:20.622 ], 00:12:20.622 "product_name": "Malloc disk", 00:12:20.622 "block_size": 512, 00:12:20.622 "num_blocks": 65536, 00:12:20.622 "uuid": "1e1f8aaa-eefd-472f-8911-f17a6cf31b8b", 00:12:20.622 "assigned_rate_limits": { 00:12:20.622 "rw_ios_per_sec": 0, 00:12:20.622 "rw_mbytes_per_sec": 0, 00:12:20.622 "r_mbytes_per_sec": 0, 00:12:20.622 "w_mbytes_per_sec": 0 00:12:20.622 }, 00:12:20.622 "claimed": false, 00:12:20.622 "zoned": false, 00:12:20.622 "supported_io_types": { 00:12:20.622 "read": true, 00:12:20.622 "write": true, 00:12:20.622 "unmap": true, 00:12:20.622 "flush": true, 00:12:20.622 "reset": true, 00:12:20.622 "nvme_admin": false, 00:12:20.622 "nvme_io": false, 00:12:20.622 "nvme_io_md": false, 00:12:20.622 "write_zeroes": true, 00:12:20.622 "zcopy": true, 00:12:20.622 "get_zone_info": false, 00:12:20.622 "zone_management": false, 00:12:20.622 "zone_append": false, 00:12:20.622 "compare": false, 00:12:20.622 "compare_and_write": false, 00:12:20.622 "abort": true, 00:12:20.622 "seek_hole": false, 00:12:20.622 "seek_data": false, 
00:12:20.622 "copy": true, 00:12:20.622 "nvme_iov_md": false 00:12:20.622 }, 00:12:20.622 "memory_domains": [ 00:12:20.622 { 00:12:20.622 "dma_device_id": "system", 00:12:20.622 "dma_device_type": 1 00:12:20.622 }, 00:12:20.622 { 00:12:20.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.622 "dma_device_type": 2 00:12:20.622 } 00:12:20.622 ], 00:12:20.622 "driver_specific": {} 00:12:20.622 } 00:12:20.622 ] 00:12:20.622 20:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.622 20:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:20.622 20:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:20.622 20:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:20.622 20:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:20.622 20:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.622 20:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.622 BaseBdev4 00:12:20.622 20:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.622 20:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:20.622 20:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:12:20.622 20:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:20.622 20:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:20.622 20:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:20.622 20:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:20.622 
20:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:20.622 20:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.622 20:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.622 20:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.622 20:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:20.622 20:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.622 20:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.622 [ 00:12:20.622 { 00:12:20.622 "name": "BaseBdev4", 00:12:20.622 "aliases": [ 00:12:20.622 "c65b184c-03e5-4c23-aad1-1dd1f2cb31d7" 00:12:20.622 ], 00:12:20.622 "product_name": "Malloc disk", 00:12:20.622 "block_size": 512, 00:12:20.622 "num_blocks": 65536, 00:12:20.622 "uuid": "c65b184c-03e5-4c23-aad1-1dd1f2cb31d7", 00:12:20.622 "assigned_rate_limits": { 00:12:20.622 "rw_ios_per_sec": 0, 00:12:20.622 "rw_mbytes_per_sec": 0, 00:12:20.622 "r_mbytes_per_sec": 0, 00:12:20.622 "w_mbytes_per_sec": 0 00:12:20.622 }, 00:12:20.622 "claimed": false, 00:12:20.622 "zoned": false, 00:12:20.622 "supported_io_types": { 00:12:20.622 "read": true, 00:12:20.622 "write": true, 00:12:20.622 "unmap": true, 00:12:20.622 "flush": true, 00:12:20.622 "reset": true, 00:12:20.622 "nvme_admin": false, 00:12:20.622 "nvme_io": false, 00:12:20.622 "nvme_io_md": false, 00:12:20.622 "write_zeroes": true, 00:12:20.622 "zcopy": true, 00:12:20.622 "get_zone_info": false, 00:12:20.622 "zone_management": false, 00:12:20.622 "zone_append": false, 00:12:20.622 "compare": false, 00:12:20.622 "compare_and_write": false, 00:12:20.622 "abort": true, 00:12:20.622 "seek_hole": false, 00:12:20.622 "seek_data": false, 00:12:20.622 
"copy": true, 00:12:20.622 "nvme_iov_md": false 00:12:20.622 }, 00:12:20.622 "memory_domains": [ 00:12:20.622 { 00:12:20.622 "dma_device_id": "system", 00:12:20.622 "dma_device_type": 1 00:12:20.622 }, 00:12:20.622 { 00:12:20.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.622 "dma_device_type": 2 00:12:20.622 } 00:12:20.622 ], 00:12:20.622 "driver_specific": {} 00:12:20.622 } 00:12:20.622 ] 00:12:20.622 20:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.622 20:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:20.622 20:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:20.622 20:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:20.622 20:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:20.622 20:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.622 20:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.623 [2024-10-17 20:09:06.272685] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:20.623 [2024-10-17 20:09:06.272888] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:20.623 [2024-10-17 20:09:06.273042] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:20.881 [2024-10-17 20:09:06.275846] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:20.881 [2024-10-17 20:09:06.275937] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:20.881 20:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.881 20:09:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:20.881 20:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:20.881 20:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:20.881 20:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:20.881 20:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:20.881 20:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:20.881 20:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.881 20:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.881 20:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.881 20:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.881 20:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.881 20:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.881 20:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.881 20:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:20.881 20:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.881 20:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.881 "name": "Existed_Raid", 00:12:20.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.881 "strip_size_kb": 64, 00:12:20.881 "state": "configuring", 00:12:20.881 
"raid_level": "raid0", 00:12:20.881 "superblock": false, 00:12:20.881 "num_base_bdevs": 4, 00:12:20.881 "num_base_bdevs_discovered": 3, 00:12:20.881 "num_base_bdevs_operational": 4, 00:12:20.881 "base_bdevs_list": [ 00:12:20.881 { 00:12:20.881 "name": "BaseBdev1", 00:12:20.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.881 "is_configured": false, 00:12:20.881 "data_offset": 0, 00:12:20.881 "data_size": 0 00:12:20.881 }, 00:12:20.881 { 00:12:20.881 "name": "BaseBdev2", 00:12:20.881 "uuid": "831a090b-e7bf-456a-a17d-88f781b8cefd", 00:12:20.881 "is_configured": true, 00:12:20.881 "data_offset": 0, 00:12:20.881 "data_size": 65536 00:12:20.881 }, 00:12:20.881 { 00:12:20.881 "name": "BaseBdev3", 00:12:20.881 "uuid": "1e1f8aaa-eefd-472f-8911-f17a6cf31b8b", 00:12:20.881 "is_configured": true, 00:12:20.881 "data_offset": 0, 00:12:20.881 "data_size": 65536 00:12:20.882 }, 00:12:20.882 { 00:12:20.882 "name": "BaseBdev4", 00:12:20.882 "uuid": "c65b184c-03e5-4c23-aad1-1dd1f2cb31d7", 00:12:20.882 "is_configured": true, 00:12:20.882 "data_offset": 0, 00:12:20.882 "data_size": 65536 00:12:20.882 } 00:12:20.882 ] 00:12:20.882 }' 00:12:20.882 20:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.882 20:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.449 20:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:21.449 20:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.449 20:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.449 [2024-10-17 20:09:06.836896] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:21.449 20:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.449 20:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:21.449 20:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:21.449 20:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:21.449 20:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:21.449 20:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:21.449 20:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:21.449 20:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.449 20:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.449 20:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.449 20:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.449 20:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.449 20:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.449 20:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.449 20:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:21.449 20:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.449 20:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.449 "name": "Existed_Raid", 00:12:21.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.449 "strip_size_kb": 64, 00:12:21.449 "state": "configuring", 00:12:21.449 "raid_level": "raid0", 00:12:21.449 "superblock": false, 00:12:21.449 
"num_base_bdevs": 4, 00:12:21.449 "num_base_bdevs_discovered": 2, 00:12:21.449 "num_base_bdevs_operational": 4, 00:12:21.449 "base_bdevs_list": [ 00:12:21.449 { 00:12:21.449 "name": "BaseBdev1", 00:12:21.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.449 "is_configured": false, 00:12:21.449 "data_offset": 0, 00:12:21.449 "data_size": 0 00:12:21.449 }, 00:12:21.449 { 00:12:21.449 "name": null, 00:12:21.449 "uuid": "831a090b-e7bf-456a-a17d-88f781b8cefd", 00:12:21.449 "is_configured": false, 00:12:21.449 "data_offset": 0, 00:12:21.449 "data_size": 65536 00:12:21.449 }, 00:12:21.449 { 00:12:21.449 "name": "BaseBdev3", 00:12:21.449 "uuid": "1e1f8aaa-eefd-472f-8911-f17a6cf31b8b", 00:12:21.449 "is_configured": true, 00:12:21.449 "data_offset": 0, 00:12:21.449 "data_size": 65536 00:12:21.449 }, 00:12:21.449 { 00:12:21.449 "name": "BaseBdev4", 00:12:21.449 "uuid": "c65b184c-03e5-4c23-aad1-1dd1f2cb31d7", 00:12:21.450 "is_configured": true, 00:12:21.450 "data_offset": 0, 00:12:21.450 "data_size": 65536 00:12:21.450 } 00:12:21.450 ] 00:12:21.450 }' 00:12:21.450 20:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.450 20:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.024 20:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.024 20:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:22.024 20:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.024 20:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.024 20:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.024 20:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:22.024 20:09:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:22.024 20:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.024 20:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.024 [2024-10-17 20:09:07.476290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:22.024 BaseBdev1 00:12:22.024 20:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.024 20:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:22.024 20:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:22.024 20:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:22.024 20:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:22.024 20:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:22.024 20:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:22.024 20:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:22.024 20:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.024 20:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.024 20:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.024 20:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:22.024 20:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.024 20:09:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:22.024 [ 00:12:22.024 { 00:12:22.024 "name": "BaseBdev1", 00:12:22.024 "aliases": [ 00:12:22.024 "df953472-9a54-4a87-90e2-c49c0231c2a0" 00:12:22.024 ], 00:12:22.024 "product_name": "Malloc disk", 00:12:22.024 "block_size": 512, 00:12:22.024 "num_blocks": 65536, 00:12:22.024 "uuid": "df953472-9a54-4a87-90e2-c49c0231c2a0", 00:12:22.024 "assigned_rate_limits": { 00:12:22.024 "rw_ios_per_sec": 0, 00:12:22.024 "rw_mbytes_per_sec": 0, 00:12:22.024 "r_mbytes_per_sec": 0, 00:12:22.024 "w_mbytes_per_sec": 0 00:12:22.024 }, 00:12:22.024 "claimed": true, 00:12:22.024 "claim_type": "exclusive_write", 00:12:22.024 "zoned": false, 00:12:22.024 "supported_io_types": { 00:12:22.024 "read": true, 00:12:22.024 "write": true, 00:12:22.024 "unmap": true, 00:12:22.024 "flush": true, 00:12:22.024 "reset": true, 00:12:22.024 "nvme_admin": false, 00:12:22.024 "nvme_io": false, 00:12:22.024 "nvme_io_md": false, 00:12:22.024 "write_zeroes": true, 00:12:22.024 "zcopy": true, 00:12:22.024 "get_zone_info": false, 00:12:22.024 "zone_management": false, 00:12:22.024 "zone_append": false, 00:12:22.024 "compare": false, 00:12:22.024 "compare_and_write": false, 00:12:22.024 "abort": true, 00:12:22.024 "seek_hole": false, 00:12:22.024 "seek_data": false, 00:12:22.024 "copy": true, 00:12:22.024 "nvme_iov_md": false 00:12:22.024 }, 00:12:22.024 "memory_domains": [ 00:12:22.024 { 00:12:22.024 "dma_device_id": "system", 00:12:22.024 "dma_device_type": 1 00:12:22.024 }, 00:12:22.024 { 00:12:22.024 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.024 "dma_device_type": 2 00:12:22.024 } 00:12:22.024 ], 00:12:22.024 "driver_specific": {} 00:12:22.024 } 00:12:22.024 ] 00:12:22.024 20:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.024 20:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:22.024 20:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:22.024 20:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:22.024 20:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:22.024 20:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:22.024 20:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:22.024 20:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:22.024 20:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.024 20:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.024 20:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.024 20:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.024 20:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.024 20:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.024 20:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.024 20:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:22.024 20:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.024 20:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.024 "name": "Existed_Raid", 00:12:22.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.024 "strip_size_kb": 64, 00:12:22.024 "state": "configuring", 00:12:22.024 "raid_level": "raid0", 00:12:22.024 "superblock": false, 
00:12:22.024 "num_base_bdevs": 4, 00:12:22.024 "num_base_bdevs_discovered": 3, 00:12:22.024 "num_base_bdevs_operational": 4, 00:12:22.024 "base_bdevs_list": [ 00:12:22.024 { 00:12:22.024 "name": "BaseBdev1", 00:12:22.024 "uuid": "df953472-9a54-4a87-90e2-c49c0231c2a0", 00:12:22.024 "is_configured": true, 00:12:22.024 "data_offset": 0, 00:12:22.024 "data_size": 65536 00:12:22.024 }, 00:12:22.024 { 00:12:22.024 "name": null, 00:12:22.025 "uuid": "831a090b-e7bf-456a-a17d-88f781b8cefd", 00:12:22.025 "is_configured": false, 00:12:22.025 "data_offset": 0, 00:12:22.025 "data_size": 65536 00:12:22.025 }, 00:12:22.025 { 00:12:22.025 "name": "BaseBdev3", 00:12:22.025 "uuid": "1e1f8aaa-eefd-472f-8911-f17a6cf31b8b", 00:12:22.025 "is_configured": true, 00:12:22.025 "data_offset": 0, 00:12:22.025 "data_size": 65536 00:12:22.025 }, 00:12:22.025 { 00:12:22.025 "name": "BaseBdev4", 00:12:22.025 "uuid": "c65b184c-03e5-4c23-aad1-1dd1f2cb31d7", 00:12:22.025 "is_configured": true, 00:12:22.025 "data_offset": 0, 00:12:22.025 "data_size": 65536 00:12:22.025 } 00:12:22.025 ] 00:12:22.025 }' 00:12:22.025 20:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.025 20:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.592 20:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:22.592 20:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.592 20:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.592 20:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.592 20:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.592 20:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:22.592 20:09:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:22.592 20:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.592 20:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.592 [2024-10-17 20:09:08.160628] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:22.592 20:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.592 20:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:22.592 20:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:22.592 20:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:22.592 20:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:22.592 20:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:22.592 20:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:22.592 20:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.592 20:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.592 20:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.592 20:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.592 20:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.592 20:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:22.592 20:09:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.592 20:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.592 20:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.592 20:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.592 "name": "Existed_Raid", 00:12:22.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.592 "strip_size_kb": 64, 00:12:22.592 "state": "configuring", 00:12:22.592 "raid_level": "raid0", 00:12:22.592 "superblock": false, 00:12:22.592 "num_base_bdevs": 4, 00:12:22.592 "num_base_bdevs_discovered": 2, 00:12:22.592 "num_base_bdevs_operational": 4, 00:12:22.592 "base_bdevs_list": [ 00:12:22.592 { 00:12:22.592 "name": "BaseBdev1", 00:12:22.592 "uuid": "df953472-9a54-4a87-90e2-c49c0231c2a0", 00:12:22.592 "is_configured": true, 00:12:22.592 "data_offset": 0, 00:12:22.592 "data_size": 65536 00:12:22.592 }, 00:12:22.592 { 00:12:22.592 "name": null, 00:12:22.592 "uuid": "831a090b-e7bf-456a-a17d-88f781b8cefd", 00:12:22.592 "is_configured": false, 00:12:22.592 "data_offset": 0, 00:12:22.592 "data_size": 65536 00:12:22.592 }, 00:12:22.592 { 00:12:22.592 "name": null, 00:12:22.592 "uuid": "1e1f8aaa-eefd-472f-8911-f17a6cf31b8b", 00:12:22.592 "is_configured": false, 00:12:22.592 "data_offset": 0, 00:12:22.592 "data_size": 65536 00:12:22.592 }, 00:12:22.592 { 00:12:22.592 "name": "BaseBdev4", 00:12:22.592 "uuid": "c65b184c-03e5-4c23-aad1-1dd1f2cb31d7", 00:12:22.592 "is_configured": true, 00:12:22.592 "data_offset": 0, 00:12:22.592 "data_size": 65536 00:12:22.592 } 00:12:22.592 ] 00:12:22.592 }' 00:12:22.592 20:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.592 20:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.159 20:09:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.159 20:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:23.159 20:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.159 20:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.159 20:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.159 20:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:23.159 20:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:23.159 20:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.159 20:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.159 [2024-10-17 20:09:08.772789] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:23.159 20:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.159 20:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:23.159 20:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:23.159 20:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:23.159 20:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:23.159 20:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:23.159 20:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:23.159 20:09:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.159 20:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.159 20:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.159 20:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.159 20:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.159 20:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.159 20:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.159 20:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:23.159 20:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.418 20:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.418 "name": "Existed_Raid", 00:12:23.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.418 "strip_size_kb": 64, 00:12:23.418 "state": "configuring", 00:12:23.418 "raid_level": "raid0", 00:12:23.418 "superblock": false, 00:12:23.418 "num_base_bdevs": 4, 00:12:23.418 "num_base_bdevs_discovered": 3, 00:12:23.418 "num_base_bdevs_operational": 4, 00:12:23.418 "base_bdevs_list": [ 00:12:23.418 { 00:12:23.418 "name": "BaseBdev1", 00:12:23.418 "uuid": "df953472-9a54-4a87-90e2-c49c0231c2a0", 00:12:23.418 "is_configured": true, 00:12:23.418 "data_offset": 0, 00:12:23.418 "data_size": 65536 00:12:23.418 }, 00:12:23.418 { 00:12:23.418 "name": null, 00:12:23.418 "uuid": "831a090b-e7bf-456a-a17d-88f781b8cefd", 00:12:23.418 "is_configured": false, 00:12:23.418 "data_offset": 0, 00:12:23.418 "data_size": 65536 00:12:23.418 }, 00:12:23.418 { 00:12:23.418 "name": "BaseBdev3", 00:12:23.418 "uuid": "1e1f8aaa-eefd-472f-8911-f17a6cf31b8b", 
00:12:23.418 "is_configured": true, 00:12:23.418 "data_offset": 0, 00:12:23.418 "data_size": 65536 00:12:23.418 }, 00:12:23.418 { 00:12:23.418 "name": "BaseBdev4", 00:12:23.418 "uuid": "c65b184c-03e5-4c23-aad1-1dd1f2cb31d7", 00:12:23.418 "is_configured": true, 00:12:23.418 "data_offset": 0, 00:12:23.418 "data_size": 65536 00:12:23.418 } 00:12:23.418 ] 00:12:23.418 }' 00:12:23.418 20:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.418 20:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.677 20:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.677 20:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.677 20:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.677 20:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:23.937 20:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.937 20:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:23.937 20:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:23.937 20:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.937 20:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.937 [2024-10-17 20:09:09.376991] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:23.937 20:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.937 20:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:23.937 20:09:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:23.937 20:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:23.937 20:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:23.937 20:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:23.937 20:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:23.937 20:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.937 20:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.937 20:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.937 20:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.937 20:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.937 20:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:23.937 20:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.937 20:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.937 20:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.937 20:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.937 "name": "Existed_Raid", 00:12:23.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.937 "strip_size_kb": 64, 00:12:23.937 "state": "configuring", 00:12:23.937 "raid_level": "raid0", 00:12:23.937 "superblock": false, 00:12:23.937 "num_base_bdevs": 4, 00:12:23.937 "num_base_bdevs_discovered": 2, 00:12:23.937 
"num_base_bdevs_operational": 4, 00:12:23.937 "base_bdevs_list": [ 00:12:23.937 { 00:12:23.937 "name": null, 00:12:23.937 "uuid": "df953472-9a54-4a87-90e2-c49c0231c2a0", 00:12:23.937 "is_configured": false, 00:12:23.937 "data_offset": 0, 00:12:23.937 "data_size": 65536 00:12:23.937 }, 00:12:23.937 { 00:12:23.937 "name": null, 00:12:23.937 "uuid": "831a090b-e7bf-456a-a17d-88f781b8cefd", 00:12:23.937 "is_configured": false, 00:12:23.937 "data_offset": 0, 00:12:23.937 "data_size": 65536 00:12:23.937 }, 00:12:23.937 { 00:12:23.937 "name": "BaseBdev3", 00:12:23.937 "uuid": "1e1f8aaa-eefd-472f-8911-f17a6cf31b8b", 00:12:23.937 "is_configured": true, 00:12:23.937 "data_offset": 0, 00:12:23.937 "data_size": 65536 00:12:23.937 }, 00:12:23.937 { 00:12:23.937 "name": "BaseBdev4", 00:12:23.937 "uuid": "c65b184c-03e5-4c23-aad1-1dd1f2cb31d7", 00:12:23.937 "is_configured": true, 00:12:23.937 "data_offset": 0, 00:12:23.937 "data_size": 65536 00:12:23.937 } 00:12:23.937 ] 00:12:23.937 }' 00:12:23.937 20:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.937 20:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.503 20:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.503 20:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.503 20:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:24.504 20:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.504 20:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.504 20:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:24.504 20:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:12:24.504 20:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.504 20:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.504 [2024-10-17 20:09:10.055175] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:24.504 20:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.504 20:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:24.504 20:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:24.504 20:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:24.504 20:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:24.504 20:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:24.504 20:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:24.504 20:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.504 20:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.504 20:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.504 20:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.504 20:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.504 20:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:24.504 20:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.504 
20:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.504 20:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.504 20:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.504 "name": "Existed_Raid", 00:12:24.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.504 "strip_size_kb": 64, 00:12:24.504 "state": "configuring", 00:12:24.504 "raid_level": "raid0", 00:12:24.504 "superblock": false, 00:12:24.504 "num_base_bdevs": 4, 00:12:24.504 "num_base_bdevs_discovered": 3, 00:12:24.504 "num_base_bdevs_operational": 4, 00:12:24.504 "base_bdevs_list": [ 00:12:24.504 { 00:12:24.504 "name": null, 00:12:24.504 "uuid": "df953472-9a54-4a87-90e2-c49c0231c2a0", 00:12:24.504 "is_configured": false, 00:12:24.504 "data_offset": 0, 00:12:24.504 "data_size": 65536 00:12:24.504 }, 00:12:24.504 { 00:12:24.504 "name": "BaseBdev2", 00:12:24.504 "uuid": "831a090b-e7bf-456a-a17d-88f781b8cefd", 00:12:24.504 "is_configured": true, 00:12:24.504 "data_offset": 0, 00:12:24.504 "data_size": 65536 00:12:24.504 }, 00:12:24.504 { 00:12:24.504 "name": "BaseBdev3", 00:12:24.504 "uuid": "1e1f8aaa-eefd-472f-8911-f17a6cf31b8b", 00:12:24.504 "is_configured": true, 00:12:24.504 "data_offset": 0, 00:12:24.504 "data_size": 65536 00:12:24.504 }, 00:12:24.504 { 00:12:24.504 "name": "BaseBdev4", 00:12:24.504 "uuid": "c65b184c-03e5-4c23-aad1-1dd1f2cb31d7", 00:12:24.504 "is_configured": true, 00:12:24.504 "data_offset": 0, 00:12:24.504 "data_size": 65536 00:12:24.504 } 00:12:24.504 ] 00:12:24.504 }' 00:12:24.504 20:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.504 20:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.071 20:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:25.071 20:09:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.071 20:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.071 20:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.071 20:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.071 20:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:25.071 20:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.071 20:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.071 20:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.071 20:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:25.071 20:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.071 20:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u df953472-9a54-4a87-90e2-c49c0231c2a0 00:12:25.071 20:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.071 20:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.330 [2024-10-17 20:09:10.727291] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:25.330 [2024-10-17 20:09:10.727641] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:25.330 [2024-10-17 20:09:10.727668] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:12:25.330 [2024-10-17 20:09:10.728020] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:25.330 
[2024-10-17 20:09:10.728270] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:25.330 [2024-10-17 20:09:10.728294] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:25.330 NewBaseBdev 00:12:25.330 [2024-10-17 20:09:10.728601] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:25.330 20:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.330 20:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:25.330 20:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:12:25.330 20:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:25.330 20:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:25.330 20:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:25.331 20:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:25.331 20:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:25.331 20:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.331 20:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.331 20:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.331 20:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:25.331 20:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.331 20:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:12:25.331 [ 00:12:25.331 { 00:12:25.331 "name": "NewBaseBdev", 00:12:25.331 "aliases": [ 00:12:25.331 "df953472-9a54-4a87-90e2-c49c0231c2a0" 00:12:25.331 ], 00:12:25.331 "product_name": "Malloc disk", 00:12:25.331 "block_size": 512, 00:12:25.331 "num_blocks": 65536, 00:12:25.331 "uuid": "df953472-9a54-4a87-90e2-c49c0231c2a0", 00:12:25.331 "assigned_rate_limits": { 00:12:25.331 "rw_ios_per_sec": 0, 00:12:25.331 "rw_mbytes_per_sec": 0, 00:12:25.331 "r_mbytes_per_sec": 0, 00:12:25.331 "w_mbytes_per_sec": 0 00:12:25.331 }, 00:12:25.331 "claimed": true, 00:12:25.331 "claim_type": "exclusive_write", 00:12:25.331 "zoned": false, 00:12:25.331 "supported_io_types": { 00:12:25.331 "read": true, 00:12:25.331 "write": true, 00:12:25.331 "unmap": true, 00:12:25.331 "flush": true, 00:12:25.331 "reset": true, 00:12:25.331 "nvme_admin": false, 00:12:25.331 "nvme_io": false, 00:12:25.331 "nvme_io_md": false, 00:12:25.331 "write_zeroes": true, 00:12:25.331 "zcopy": true, 00:12:25.331 "get_zone_info": false, 00:12:25.331 "zone_management": false, 00:12:25.331 "zone_append": false, 00:12:25.331 "compare": false, 00:12:25.331 "compare_and_write": false, 00:12:25.331 "abort": true, 00:12:25.331 "seek_hole": false, 00:12:25.331 "seek_data": false, 00:12:25.331 "copy": true, 00:12:25.331 "nvme_iov_md": false 00:12:25.331 }, 00:12:25.331 "memory_domains": [ 00:12:25.331 { 00:12:25.331 "dma_device_id": "system", 00:12:25.331 "dma_device_type": 1 00:12:25.331 }, 00:12:25.331 { 00:12:25.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.331 "dma_device_type": 2 00:12:25.331 } 00:12:25.331 ], 00:12:25.331 "driver_specific": {} 00:12:25.331 } 00:12:25.331 ] 00:12:25.331 20:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.331 20:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:25.331 20:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid 
online raid0 64 4 00:12:25.331 20:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:25.331 20:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:25.331 20:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:25.331 20:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:25.331 20:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:25.331 20:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:25.331 20:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.331 20:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:25.331 20:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.331 20:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.331 20:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.331 20:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:25.331 20:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.331 20:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.331 20:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:25.331 "name": "Existed_Raid", 00:12:25.331 "uuid": "01db8af0-f311-4621-9be1-a758f33a07ee", 00:12:25.331 "strip_size_kb": 64, 00:12:25.331 "state": "online", 00:12:25.331 "raid_level": "raid0", 00:12:25.331 "superblock": false, 00:12:25.331 "num_base_bdevs": 4, 00:12:25.331 
"num_base_bdevs_discovered": 4, 00:12:25.331 "num_base_bdevs_operational": 4, 00:12:25.331 "base_bdevs_list": [ 00:12:25.331 { 00:12:25.331 "name": "NewBaseBdev", 00:12:25.331 "uuid": "df953472-9a54-4a87-90e2-c49c0231c2a0", 00:12:25.331 "is_configured": true, 00:12:25.331 "data_offset": 0, 00:12:25.331 "data_size": 65536 00:12:25.331 }, 00:12:25.331 { 00:12:25.331 "name": "BaseBdev2", 00:12:25.331 "uuid": "831a090b-e7bf-456a-a17d-88f781b8cefd", 00:12:25.331 "is_configured": true, 00:12:25.331 "data_offset": 0, 00:12:25.331 "data_size": 65536 00:12:25.331 }, 00:12:25.331 { 00:12:25.331 "name": "BaseBdev3", 00:12:25.331 "uuid": "1e1f8aaa-eefd-472f-8911-f17a6cf31b8b", 00:12:25.331 "is_configured": true, 00:12:25.331 "data_offset": 0, 00:12:25.331 "data_size": 65536 00:12:25.331 }, 00:12:25.331 { 00:12:25.331 "name": "BaseBdev4", 00:12:25.331 "uuid": "c65b184c-03e5-4c23-aad1-1dd1f2cb31d7", 00:12:25.331 "is_configured": true, 00:12:25.331 "data_offset": 0, 00:12:25.331 "data_size": 65536 00:12:25.331 } 00:12:25.331 ] 00:12:25.331 }' 00:12:25.331 20:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:25.331 20:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.898 20:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:25.898 20:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:25.898 20:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:25.898 20:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:25.898 20:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:25.898 20:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:25.898 20:09:11 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:25.898 20:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:25.898 20:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.898 20:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.898 [2024-10-17 20:09:11.295986] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:25.898 20:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.898 20:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:25.898 "name": "Existed_Raid", 00:12:25.898 "aliases": [ 00:12:25.898 "01db8af0-f311-4621-9be1-a758f33a07ee" 00:12:25.898 ], 00:12:25.898 "product_name": "Raid Volume", 00:12:25.898 "block_size": 512, 00:12:25.898 "num_blocks": 262144, 00:12:25.898 "uuid": "01db8af0-f311-4621-9be1-a758f33a07ee", 00:12:25.898 "assigned_rate_limits": { 00:12:25.898 "rw_ios_per_sec": 0, 00:12:25.898 "rw_mbytes_per_sec": 0, 00:12:25.898 "r_mbytes_per_sec": 0, 00:12:25.898 "w_mbytes_per_sec": 0 00:12:25.898 }, 00:12:25.898 "claimed": false, 00:12:25.898 "zoned": false, 00:12:25.898 "supported_io_types": { 00:12:25.898 "read": true, 00:12:25.898 "write": true, 00:12:25.898 "unmap": true, 00:12:25.898 "flush": true, 00:12:25.898 "reset": true, 00:12:25.898 "nvme_admin": false, 00:12:25.898 "nvme_io": false, 00:12:25.898 "nvme_io_md": false, 00:12:25.899 "write_zeroes": true, 00:12:25.899 "zcopy": false, 00:12:25.899 "get_zone_info": false, 00:12:25.899 "zone_management": false, 00:12:25.899 "zone_append": false, 00:12:25.899 "compare": false, 00:12:25.899 "compare_and_write": false, 00:12:25.899 "abort": false, 00:12:25.899 "seek_hole": false, 00:12:25.899 "seek_data": false, 00:12:25.899 "copy": false, 00:12:25.899 "nvme_iov_md": false 00:12:25.899 }, 00:12:25.899 "memory_domains": [ 
00:12:25.899 { 00:12:25.899 "dma_device_id": "system", 00:12:25.899 "dma_device_type": 1 00:12:25.899 }, 00:12:25.899 { 00:12:25.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.899 "dma_device_type": 2 00:12:25.899 }, 00:12:25.899 { 00:12:25.899 "dma_device_id": "system", 00:12:25.899 "dma_device_type": 1 00:12:25.899 }, 00:12:25.899 { 00:12:25.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.899 "dma_device_type": 2 00:12:25.899 }, 00:12:25.899 { 00:12:25.899 "dma_device_id": "system", 00:12:25.899 "dma_device_type": 1 00:12:25.899 }, 00:12:25.899 { 00:12:25.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.899 "dma_device_type": 2 00:12:25.899 }, 00:12:25.899 { 00:12:25.899 "dma_device_id": "system", 00:12:25.899 "dma_device_type": 1 00:12:25.899 }, 00:12:25.899 { 00:12:25.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.899 "dma_device_type": 2 00:12:25.899 } 00:12:25.899 ], 00:12:25.899 "driver_specific": { 00:12:25.899 "raid": { 00:12:25.899 "uuid": "01db8af0-f311-4621-9be1-a758f33a07ee", 00:12:25.899 "strip_size_kb": 64, 00:12:25.899 "state": "online", 00:12:25.899 "raid_level": "raid0", 00:12:25.899 "superblock": false, 00:12:25.899 "num_base_bdevs": 4, 00:12:25.899 "num_base_bdevs_discovered": 4, 00:12:25.899 "num_base_bdevs_operational": 4, 00:12:25.899 "base_bdevs_list": [ 00:12:25.899 { 00:12:25.899 "name": "NewBaseBdev", 00:12:25.899 "uuid": "df953472-9a54-4a87-90e2-c49c0231c2a0", 00:12:25.899 "is_configured": true, 00:12:25.899 "data_offset": 0, 00:12:25.899 "data_size": 65536 00:12:25.899 }, 00:12:25.899 { 00:12:25.899 "name": "BaseBdev2", 00:12:25.899 "uuid": "831a090b-e7bf-456a-a17d-88f781b8cefd", 00:12:25.899 "is_configured": true, 00:12:25.899 "data_offset": 0, 00:12:25.899 "data_size": 65536 00:12:25.899 }, 00:12:25.899 { 00:12:25.899 "name": "BaseBdev3", 00:12:25.899 "uuid": "1e1f8aaa-eefd-472f-8911-f17a6cf31b8b", 00:12:25.899 "is_configured": true, 00:12:25.899 "data_offset": 0, 00:12:25.899 "data_size": 65536 
00:12:25.899 }, 00:12:25.899 { 00:12:25.899 "name": "BaseBdev4", 00:12:25.899 "uuid": "c65b184c-03e5-4c23-aad1-1dd1f2cb31d7", 00:12:25.899 "is_configured": true, 00:12:25.899 "data_offset": 0, 00:12:25.899 "data_size": 65536 00:12:25.899 } 00:12:25.899 ] 00:12:25.899 } 00:12:25.899 } 00:12:25.899 }' 00:12:25.899 20:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:25.899 20:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:25.899 BaseBdev2 00:12:25.899 BaseBdev3 00:12:25.899 BaseBdev4' 00:12:25.899 20:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:25.899 20:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:25.899 20:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:25.899 20:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:25.899 20:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:25.899 20:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.899 20:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.899 20:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.899 20:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:25.899 20:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:25.899 20:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:25.899 
20:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:25.899 20:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:25.899 20:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.899 20:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.899 20:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.159 20:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:26.159 20:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:26.159 20:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:26.159 20:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:26.159 20:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:26.159 20:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.159 20:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.159 20:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.159 20:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:26.159 20:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:26.159 20:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:26.159 20:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 
00:12:26.159 20:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.159 20:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.159 20:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:26.159 20:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.159 20:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:26.159 20:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:26.159 20:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:26.159 20:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.159 20:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.159 [2024-10-17 20:09:11.687736] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:26.159 [2024-10-17 20:09:11.687900] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:26.159 [2024-10-17 20:09:11.688137] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:26.159 [2024-10-17 20:09:11.688353] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:26.159 [2024-10-17 20:09:11.688381] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:26.159 20:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.159 20:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69361 00:12:26.159 20:09:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@950 -- # '[' -z 69361 ']' 00:12:26.159 20:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 69361 00:12:26.159 20:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:12:26.159 20:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:26.159 20:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69361 00:12:26.159 20:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:26.159 20:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:26.159 20:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69361' 00:12:26.159 killing process with pid 69361 00:12:26.159 20:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 69361 00:12:26.159 [2024-10-17 20:09:11.726425] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:26.159 20:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 69361 00:12:26.727 [2024-10-17 20:09:12.082940] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:27.664 20:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:27.664 00:12:27.664 real 0m13.211s 00:12:27.664 user 0m21.980s 00:12:27.664 sys 0m1.848s 00:12:27.664 20:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:27.664 20:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.664 ************************************ 00:12:27.664 END TEST raid_state_function_test 00:12:27.664 ************************************ 00:12:27.664 20:09:13 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:12:27.664 20:09:13 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:27.664 20:09:13 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:27.664 20:09:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:27.664 ************************************ 00:12:27.664 START TEST raid_state_function_test_sb 00:12:27.664 ************************************ 00:12:27.664 20:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 true 00:12:27.664 20:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:12:27.664 20:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:27.664 20:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:27.664 20:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:27.664 20:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:27.664 20:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:27.664 20:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:27.664 20:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:27.664 20:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:27.664 20:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:27.664 20:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:27.664 20:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:27.664 20:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:27.664 
20:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:27.664 20:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:27.664 20:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:27.664 20:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:27.664 20:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:27.664 20:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:27.664 20:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:27.664 20:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:27.664 20:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:27.664 20:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:27.664 20:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:27.664 20:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:12:27.664 20:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:27.664 20:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:27.664 20:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:27.664 20:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:27.664 20:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70049 00:12:27.664 20:09:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70049' 00:12:27.664 Process raid pid: 70049 00:12:27.664 20:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:27.664 20:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70049 00:12:27.664 20:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 70049 ']' 00:12:27.664 20:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:27.664 20:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:27.664 20:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:27.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:27.664 20:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:27.664 20:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.664 [2024-10-17 20:09:13.297201] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 
00:12:27.664 [2024-10-17 20:09:13.297617] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:27.923 [2024-10-17 20:09:13.477294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:28.181 [2024-10-17 20:09:13.608122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.181 [2024-10-17 20:09:13.820552] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:28.181 [2024-10-17 20:09:13.820753] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:28.748 20:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:28.748 20:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:12:28.748 20:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:28.748 20:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.748 20:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.748 [2024-10-17 20:09:14.287817] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:28.748 [2024-10-17 20:09:14.287899] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:28.748 [2024-10-17 20:09:14.287916] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:28.748 [2024-10-17 20:09:14.287933] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:28.748 [2024-10-17 20:09:14.287943] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:12:28.748 [2024-10-17 20:09:14.287957] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:28.748 [2024-10-17 20:09:14.287967] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:28.748 [2024-10-17 20:09:14.287980] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:28.748 20:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.748 20:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:28.748 20:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:28.748 20:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:28.748 20:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:28.748 20:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:28.748 20:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:28.748 20:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:28.748 20:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:28.748 20:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:28.748 20:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:28.748 20:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.748 20:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:28.748 20:09:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.748 20:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.748 20:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.749 20:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:28.749 "name": "Existed_Raid", 00:12:28.749 "uuid": "11b286ff-d361-4c0f-a3d5-758a2eeed9bf", 00:12:28.749 "strip_size_kb": 64, 00:12:28.749 "state": "configuring", 00:12:28.749 "raid_level": "raid0", 00:12:28.749 "superblock": true, 00:12:28.749 "num_base_bdevs": 4, 00:12:28.749 "num_base_bdevs_discovered": 0, 00:12:28.749 "num_base_bdevs_operational": 4, 00:12:28.749 "base_bdevs_list": [ 00:12:28.749 { 00:12:28.749 "name": "BaseBdev1", 00:12:28.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.749 "is_configured": false, 00:12:28.749 "data_offset": 0, 00:12:28.749 "data_size": 0 00:12:28.749 }, 00:12:28.749 { 00:12:28.749 "name": "BaseBdev2", 00:12:28.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.749 "is_configured": false, 00:12:28.749 "data_offset": 0, 00:12:28.749 "data_size": 0 00:12:28.749 }, 00:12:28.749 { 00:12:28.749 "name": "BaseBdev3", 00:12:28.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.749 "is_configured": false, 00:12:28.749 "data_offset": 0, 00:12:28.749 "data_size": 0 00:12:28.749 }, 00:12:28.749 { 00:12:28.749 "name": "BaseBdev4", 00:12:28.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.749 "is_configured": false, 00:12:28.749 "data_offset": 0, 00:12:28.749 "data_size": 0 00:12:28.749 } 00:12:28.749 ] 00:12:28.749 }' 00:12:28.749 20:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.749 20:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.316 20:09:14 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:29.316 20:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.316 20:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.316 [2024-10-17 20:09:14.847911] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:29.316 [2024-10-17 20:09:14.847963] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:29.316 20:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.316 20:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:29.316 20:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.316 20:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.316 [2024-10-17 20:09:14.855985] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:29.316 [2024-10-17 20:09:14.856107] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:29.316 [2024-10-17 20:09:14.856126] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:29.316 [2024-10-17 20:09:14.856144] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:29.316 [2024-10-17 20:09:14.856154] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:29.316 [2024-10-17 20:09:14.856170] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:29.316 [2024-10-17 20:09:14.856180] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:12:29.316 [2024-10-17 20:09:14.856194] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:29.316 20:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.316 20:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:29.316 20:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.316 20:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.316 [2024-10-17 20:09:14.903549] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:29.316 BaseBdev1 00:12:29.316 20:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.316 20:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:29.316 20:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:29.316 20:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:29.316 20:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:29.316 20:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:29.316 20:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:29.316 20:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:29.316 20:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.316 20:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.316 20:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:12:29.316 20:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:29.316 20:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.316 20:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.316 [ 00:12:29.316 { 00:12:29.316 "name": "BaseBdev1", 00:12:29.316 "aliases": [ 00:12:29.316 "805bcd08-871c-431b-a0b1-3852a27abfa6" 00:12:29.316 ], 00:12:29.316 "product_name": "Malloc disk", 00:12:29.316 "block_size": 512, 00:12:29.316 "num_blocks": 65536, 00:12:29.316 "uuid": "805bcd08-871c-431b-a0b1-3852a27abfa6", 00:12:29.316 "assigned_rate_limits": { 00:12:29.316 "rw_ios_per_sec": 0, 00:12:29.316 "rw_mbytes_per_sec": 0, 00:12:29.316 "r_mbytes_per_sec": 0, 00:12:29.316 "w_mbytes_per_sec": 0 00:12:29.316 }, 00:12:29.316 "claimed": true, 00:12:29.316 "claim_type": "exclusive_write", 00:12:29.316 "zoned": false, 00:12:29.316 "supported_io_types": { 00:12:29.316 "read": true, 00:12:29.316 "write": true, 00:12:29.316 "unmap": true, 00:12:29.316 "flush": true, 00:12:29.316 "reset": true, 00:12:29.316 "nvme_admin": false, 00:12:29.316 "nvme_io": false, 00:12:29.316 "nvme_io_md": false, 00:12:29.316 "write_zeroes": true, 00:12:29.316 "zcopy": true, 00:12:29.316 "get_zone_info": false, 00:12:29.316 "zone_management": false, 00:12:29.316 "zone_append": false, 00:12:29.316 "compare": false, 00:12:29.316 "compare_and_write": false, 00:12:29.316 "abort": true, 00:12:29.316 "seek_hole": false, 00:12:29.316 "seek_data": false, 00:12:29.316 "copy": true, 00:12:29.316 "nvme_iov_md": false 00:12:29.316 }, 00:12:29.316 "memory_domains": [ 00:12:29.316 { 00:12:29.316 "dma_device_id": "system", 00:12:29.316 "dma_device_type": 1 00:12:29.316 }, 00:12:29.316 { 00:12:29.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:29.316 "dma_device_type": 2 00:12:29.316 } 00:12:29.316 ], 00:12:29.316 "driver_specific": {} 
00:12:29.316 } 00:12:29.316 ] 00:12:29.316 20:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.316 20:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:29.316 20:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:29.316 20:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:29.316 20:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:29.316 20:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:29.316 20:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:29.316 20:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:29.316 20:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.316 20:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.316 20:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.316 20:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.316 20:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.316 20:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:29.316 20:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.316 20:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.316 20:09:14 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.575 20:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.575 "name": "Existed_Raid", 00:12:29.575 "uuid": "d66756d1-8030-40f6-92c6-0759ff583202", 00:12:29.575 "strip_size_kb": 64, 00:12:29.575 "state": "configuring", 00:12:29.575 "raid_level": "raid0", 00:12:29.575 "superblock": true, 00:12:29.575 "num_base_bdevs": 4, 00:12:29.575 "num_base_bdevs_discovered": 1, 00:12:29.575 "num_base_bdevs_operational": 4, 00:12:29.575 "base_bdevs_list": [ 00:12:29.575 { 00:12:29.575 "name": "BaseBdev1", 00:12:29.575 "uuid": "805bcd08-871c-431b-a0b1-3852a27abfa6", 00:12:29.575 "is_configured": true, 00:12:29.575 "data_offset": 2048, 00:12:29.575 "data_size": 63488 00:12:29.575 }, 00:12:29.575 { 00:12:29.575 "name": "BaseBdev2", 00:12:29.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.575 "is_configured": false, 00:12:29.575 "data_offset": 0, 00:12:29.575 "data_size": 0 00:12:29.575 }, 00:12:29.575 { 00:12:29.575 "name": "BaseBdev3", 00:12:29.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.575 "is_configured": false, 00:12:29.575 "data_offset": 0, 00:12:29.575 "data_size": 0 00:12:29.575 }, 00:12:29.575 { 00:12:29.575 "name": "BaseBdev4", 00:12:29.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.575 "is_configured": false, 00:12:29.575 "data_offset": 0, 00:12:29.575 "data_size": 0 00:12:29.575 } 00:12:29.575 ] 00:12:29.575 }' 00:12:29.575 20:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.575 20:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.834 20:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:29.834 20:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.834 20:09:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:29.834 [2024-10-17 20:09:15.459764] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:29.834 [2024-10-17 20:09:15.459836] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:29.834 20:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.834 20:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:29.834 20:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.834 20:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.834 [2024-10-17 20:09:15.467857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:29.834 [2024-10-17 20:09:15.470604] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:29.834 [2024-10-17 20:09:15.470675] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:29.834 [2024-10-17 20:09:15.470691] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:29.834 [2024-10-17 20:09:15.470708] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:29.834 [2024-10-17 20:09:15.470719] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:29.834 [2024-10-17 20:09:15.470732] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:29.834 20:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.834 20:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:29.834 20:09:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:29.834 20:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:29.834 20:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:29.834 20:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:29.834 20:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:29.834 20:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:29.834 20:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:29.834 20:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.834 20:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.834 20:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.834 20:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.834 20:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.834 20:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:29.834 20:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.834 20:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.093 20:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.093 20:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.093 "name": 
"Existed_Raid", 00:12:30.093 "uuid": "a3b8e0a8-4ae3-4159-9a69-dfd35e20261d", 00:12:30.093 "strip_size_kb": 64, 00:12:30.093 "state": "configuring", 00:12:30.093 "raid_level": "raid0", 00:12:30.093 "superblock": true, 00:12:30.093 "num_base_bdevs": 4, 00:12:30.093 "num_base_bdevs_discovered": 1, 00:12:30.093 "num_base_bdevs_operational": 4, 00:12:30.093 "base_bdevs_list": [ 00:12:30.093 { 00:12:30.093 "name": "BaseBdev1", 00:12:30.093 "uuid": "805bcd08-871c-431b-a0b1-3852a27abfa6", 00:12:30.093 "is_configured": true, 00:12:30.093 "data_offset": 2048, 00:12:30.093 "data_size": 63488 00:12:30.093 }, 00:12:30.093 { 00:12:30.093 "name": "BaseBdev2", 00:12:30.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.093 "is_configured": false, 00:12:30.093 "data_offset": 0, 00:12:30.093 "data_size": 0 00:12:30.093 }, 00:12:30.093 { 00:12:30.093 "name": "BaseBdev3", 00:12:30.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.094 "is_configured": false, 00:12:30.094 "data_offset": 0, 00:12:30.094 "data_size": 0 00:12:30.094 }, 00:12:30.094 { 00:12:30.094 "name": "BaseBdev4", 00:12:30.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.094 "is_configured": false, 00:12:30.094 "data_offset": 0, 00:12:30.094 "data_size": 0 00:12:30.094 } 00:12:30.094 ] 00:12:30.094 }' 00:12:30.094 20:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.094 20:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.352 20:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:30.352 20:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.352 20:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.612 [2024-10-17 20:09:16.039784] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:12:30.612 BaseBdev2 00:12:30.612 20:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.612 20:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:30.612 20:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:30.612 20:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:30.612 20:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:30.612 20:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:30.612 20:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:30.612 20:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:30.612 20:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.612 20:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.612 20:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.612 20:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:30.612 20:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.612 20:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.612 [ 00:12:30.612 { 00:12:30.612 "name": "BaseBdev2", 00:12:30.612 "aliases": [ 00:12:30.612 "5c8ce887-e35a-44d4-b26d-f31caa7f39b4" 00:12:30.612 ], 00:12:30.612 "product_name": "Malloc disk", 00:12:30.612 "block_size": 512, 00:12:30.612 "num_blocks": 65536, 00:12:30.612 "uuid": "5c8ce887-e35a-44d4-b26d-f31caa7f39b4", 00:12:30.612 
"assigned_rate_limits": { 00:12:30.612 "rw_ios_per_sec": 0, 00:12:30.612 "rw_mbytes_per_sec": 0, 00:12:30.612 "r_mbytes_per_sec": 0, 00:12:30.612 "w_mbytes_per_sec": 0 00:12:30.612 }, 00:12:30.612 "claimed": true, 00:12:30.612 "claim_type": "exclusive_write", 00:12:30.612 "zoned": false, 00:12:30.612 "supported_io_types": { 00:12:30.612 "read": true, 00:12:30.612 "write": true, 00:12:30.612 "unmap": true, 00:12:30.612 "flush": true, 00:12:30.612 "reset": true, 00:12:30.612 "nvme_admin": false, 00:12:30.612 "nvme_io": false, 00:12:30.612 "nvme_io_md": false, 00:12:30.612 "write_zeroes": true, 00:12:30.612 "zcopy": true, 00:12:30.612 "get_zone_info": false, 00:12:30.612 "zone_management": false, 00:12:30.612 "zone_append": false, 00:12:30.612 "compare": false, 00:12:30.612 "compare_and_write": false, 00:12:30.612 "abort": true, 00:12:30.612 "seek_hole": false, 00:12:30.612 "seek_data": false, 00:12:30.612 "copy": true, 00:12:30.612 "nvme_iov_md": false 00:12:30.612 }, 00:12:30.612 "memory_domains": [ 00:12:30.612 { 00:12:30.612 "dma_device_id": "system", 00:12:30.612 "dma_device_type": 1 00:12:30.612 }, 00:12:30.612 { 00:12:30.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:30.612 "dma_device_type": 2 00:12:30.612 } 00:12:30.612 ], 00:12:30.612 "driver_specific": {} 00:12:30.612 } 00:12:30.612 ] 00:12:30.612 20:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.612 20:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:30.612 20:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:30.612 20:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:30.612 20:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:30.612 20:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:12:30.612 20:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:30.612 20:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:30.612 20:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:30.612 20:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:30.612 20:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.612 20:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.612 20:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.612 20:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.612 20:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.612 20:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:30.612 20:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.612 20:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.612 20:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.612 20:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.612 "name": "Existed_Raid", 00:12:30.612 "uuid": "a3b8e0a8-4ae3-4159-9a69-dfd35e20261d", 00:12:30.612 "strip_size_kb": 64, 00:12:30.612 "state": "configuring", 00:12:30.612 "raid_level": "raid0", 00:12:30.612 "superblock": true, 00:12:30.612 "num_base_bdevs": 4, 00:12:30.612 "num_base_bdevs_discovered": 2, 00:12:30.612 "num_base_bdevs_operational": 4, 
00:12:30.612 "base_bdevs_list": [ 00:12:30.612 { 00:12:30.612 "name": "BaseBdev1", 00:12:30.612 "uuid": "805bcd08-871c-431b-a0b1-3852a27abfa6", 00:12:30.612 "is_configured": true, 00:12:30.612 "data_offset": 2048, 00:12:30.612 "data_size": 63488 00:12:30.612 }, 00:12:30.612 { 00:12:30.612 "name": "BaseBdev2", 00:12:30.612 "uuid": "5c8ce887-e35a-44d4-b26d-f31caa7f39b4", 00:12:30.612 "is_configured": true, 00:12:30.612 "data_offset": 2048, 00:12:30.612 "data_size": 63488 00:12:30.612 }, 00:12:30.612 { 00:12:30.612 "name": "BaseBdev3", 00:12:30.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.612 "is_configured": false, 00:12:30.612 "data_offset": 0, 00:12:30.612 "data_size": 0 00:12:30.612 }, 00:12:30.612 { 00:12:30.612 "name": "BaseBdev4", 00:12:30.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.612 "is_configured": false, 00:12:30.612 "data_offset": 0, 00:12:30.612 "data_size": 0 00:12:30.612 } 00:12:30.612 ] 00:12:30.612 }' 00:12:30.612 20:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.612 20:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.180 20:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:31.180 20:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.180 20:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.180 BaseBdev3 00:12:31.180 [2024-10-17 20:09:16.661235] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:31.180 20:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.180 20:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:31.180 20:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # 
local bdev_name=BaseBdev3 00:12:31.180 20:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:31.180 20:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:31.180 20:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:31.180 20:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:31.180 20:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:31.180 20:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.180 20:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.180 20:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.180 20:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:31.180 20:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.180 20:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.180 [ 00:12:31.180 { 00:12:31.180 "name": "BaseBdev3", 00:12:31.180 "aliases": [ 00:12:31.180 "ff2b1877-1f7b-4f5d-bb39-fa7043dc4aa9" 00:12:31.180 ], 00:12:31.180 "product_name": "Malloc disk", 00:12:31.180 "block_size": 512, 00:12:31.180 "num_blocks": 65536, 00:12:31.180 "uuid": "ff2b1877-1f7b-4f5d-bb39-fa7043dc4aa9", 00:12:31.180 "assigned_rate_limits": { 00:12:31.180 "rw_ios_per_sec": 0, 00:12:31.180 "rw_mbytes_per_sec": 0, 00:12:31.180 "r_mbytes_per_sec": 0, 00:12:31.180 "w_mbytes_per_sec": 0 00:12:31.180 }, 00:12:31.180 "claimed": true, 00:12:31.180 "claim_type": "exclusive_write", 00:12:31.180 "zoned": false, 00:12:31.180 "supported_io_types": { 00:12:31.180 "read": true, 00:12:31.180 
"write": true, 00:12:31.180 "unmap": true, 00:12:31.180 "flush": true, 00:12:31.180 "reset": true, 00:12:31.180 "nvme_admin": false, 00:12:31.180 "nvme_io": false, 00:12:31.180 "nvme_io_md": false, 00:12:31.180 "write_zeroes": true, 00:12:31.180 "zcopy": true, 00:12:31.180 "get_zone_info": false, 00:12:31.180 "zone_management": false, 00:12:31.180 "zone_append": false, 00:12:31.180 "compare": false, 00:12:31.180 "compare_and_write": false, 00:12:31.180 "abort": true, 00:12:31.180 "seek_hole": false, 00:12:31.180 "seek_data": false, 00:12:31.180 "copy": true, 00:12:31.180 "nvme_iov_md": false 00:12:31.180 }, 00:12:31.180 "memory_domains": [ 00:12:31.180 { 00:12:31.180 "dma_device_id": "system", 00:12:31.180 "dma_device_type": 1 00:12:31.180 }, 00:12:31.180 { 00:12:31.180 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:31.180 "dma_device_type": 2 00:12:31.180 } 00:12:31.180 ], 00:12:31.180 "driver_specific": {} 00:12:31.180 } 00:12:31.180 ] 00:12:31.180 20:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.180 20:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:31.180 20:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:31.180 20:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:31.180 20:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:31.180 20:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:31.180 20:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:31.180 20:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:31.180 20:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:12:31.180 20:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:31.180 20:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.180 20:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.180 20:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.180 20:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.180 20:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.180 20:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.180 20:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.180 20:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:31.180 20:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.180 20:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.180 "name": "Existed_Raid", 00:12:31.180 "uuid": "a3b8e0a8-4ae3-4159-9a69-dfd35e20261d", 00:12:31.180 "strip_size_kb": 64, 00:12:31.180 "state": "configuring", 00:12:31.180 "raid_level": "raid0", 00:12:31.180 "superblock": true, 00:12:31.180 "num_base_bdevs": 4, 00:12:31.180 "num_base_bdevs_discovered": 3, 00:12:31.180 "num_base_bdevs_operational": 4, 00:12:31.180 "base_bdevs_list": [ 00:12:31.180 { 00:12:31.180 "name": "BaseBdev1", 00:12:31.180 "uuid": "805bcd08-871c-431b-a0b1-3852a27abfa6", 00:12:31.180 "is_configured": true, 00:12:31.180 "data_offset": 2048, 00:12:31.180 "data_size": 63488 00:12:31.180 }, 00:12:31.180 { 00:12:31.180 "name": "BaseBdev2", 00:12:31.180 "uuid": 
"5c8ce887-e35a-44d4-b26d-f31caa7f39b4", 00:12:31.180 "is_configured": true, 00:12:31.180 "data_offset": 2048, 00:12:31.180 "data_size": 63488 00:12:31.180 }, 00:12:31.180 { 00:12:31.180 "name": "BaseBdev3", 00:12:31.180 "uuid": "ff2b1877-1f7b-4f5d-bb39-fa7043dc4aa9", 00:12:31.180 "is_configured": true, 00:12:31.180 "data_offset": 2048, 00:12:31.180 "data_size": 63488 00:12:31.180 }, 00:12:31.180 { 00:12:31.180 "name": "BaseBdev4", 00:12:31.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.180 "is_configured": false, 00:12:31.180 "data_offset": 0, 00:12:31.180 "data_size": 0 00:12:31.180 } 00:12:31.180 ] 00:12:31.180 }' 00:12:31.180 20:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.180 20:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.748 20:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:31.748 20:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.748 20:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.748 BaseBdev4 00:12:31.748 [2024-10-17 20:09:17.257221] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:31.749 [2024-10-17 20:09:17.257560] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:31.749 [2024-10-17 20:09:17.257580] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:31.749 [2024-10-17 20:09:17.257908] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:31.749 [2024-10-17 20:09:17.258150] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:31.749 [2024-10-17 20:09:17.258192] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:12:31.749 [2024-10-17 20:09:17.258380] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:31.749 20:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.749 20:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:31.749 20:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:12:31.749 20:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:31.749 20:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:31.749 20:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:31.749 20:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:31.749 20:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:31.749 20:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.749 20:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.749 20:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.749 20:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:31.749 20:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.749 20:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.749 [ 00:12:31.749 { 00:12:31.749 "name": "BaseBdev4", 00:12:31.749 "aliases": [ 00:12:31.749 "c981f6e4-6148-4288-be98-e0b5004d7a32" 00:12:31.749 ], 00:12:31.749 "product_name": "Malloc disk", 00:12:31.749 "block_size": 512, 00:12:31.749 
"num_blocks": 65536, 00:12:31.749 "uuid": "c981f6e4-6148-4288-be98-e0b5004d7a32", 00:12:31.749 "assigned_rate_limits": { 00:12:31.749 "rw_ios_per_sec": 0, 00:12:31.749 "rw_mbytes_per_sec": 0, 00:12:31.749 "r_mbytes_per_sec": 0, 00:12:31.749 "w_mbytes_per_sec": 0 00:12:31.749 }, 00:12:31.749 "claimed": true, 00:12:31.749 "claim_type": "exclusive_write", 00:12:31.749 "zoned": false, 00:12:31.749 "supported_io_types": { 00:12:31.749 "read": true, 00:12:31.749 "write": true, 00:12:31.749 "unmap": true, 00:12:31.749 "flush": true, 00:12:31.749 "reset": true, 00:12:31.749 "nvme_admin": false, 00:12:31.749 "nvme_io": false, 00:12:31.749 "nvme_io_md": false, 00:12:31.749 "write_zeroes": true, 00:12:31.749 "zcopy": true, 00:12:31.749 "get_zone_info": false, 00:12:31.749 "zone_management": false, 00:12:31.749 "zone_append": false, 00:12:31.749 "compare": false, 00:12:31.749 "compare_and_write": false, 00:12:31.749 "abort": true, 00:12:31.749 "seek_hole": false, 00:12:31.749 "seek_data": false, 00:12:31.749 "copy": true, 00:12:31.749 "nvme_iov_md": false 00:12:31.749 }, 00:12:31.749 "memory_domains": [ 00:12:31.749 { 00:12:31.749 "dma_device_id": "system", 00:12:31.749 "dma_device_type": 1 00:12:31.749 }, 00:12:31.749 { 00:12:31.749 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:31.749 "dma_device_type": 2 00:12:31.749 } 00:12:31.749 ], 00:12:31.749 "driver_specific": {} 00:12:31.749 } 00:12:31.749 ] 00:12:31.749 20:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.749 20:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:31.749 20:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:31.749 20:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:31.749 20:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:12:31.749 20:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:31.749 20:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:31.749 20:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:31.749 20:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:31.749 20:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:31.749 20:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.749 20:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.749 20:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.749 20:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.749 20:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.749 20:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.749 20:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:31.749 20:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.749 20:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.749 20:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.749 "name": "Existed_Raid", 00:12:31.749 "uuid": "a3b8e0a8-4ae3-4159-9a69-dfd35e20261d", 00:12:31.749 "strip_size_kb": 64, 00:12:31.749 "state": "online", 00:12:31.749 "raid_level": "raid0", 00:12:31.749 "superblock": true, 00:12:31.749 "num_base_bdevs": 4, 
00:12:31.749 "num_base_bdevs_discovered": 4, 00:12:31.749 "num_base_bdevs_operational": 4, 00:12:31.749 "base_bdevs_list": [ 00:12:31.749 { 00:12:31.749 "name": "BaseBdev1", 00:12:31.749 "uuid": "805bcd08-871c-431b-a0b1-3852a27abfa6", 00:12:31.749 "is_configured": true, 00:12:31.749 "data_offset": 2048, 00:12:31.749 "data_size": 63488 00:12:31.749 }, 00:12:31.749 { 00:12:31.749 "name": "BaseBdev2", 00:12:31.749 "uuid": "5c8ce887-e35a-44d4-b26d-f31caa7f39b4", 00:12:31.749 "is_configured": true, 00:12:31.749 "data_offset": 2048, 00:12:31.749 "data_size": 63488 00:12:31.749 }, 00:12:31.749 { 00:12:31.749 "name": "BaseBdev3", 00:12:31.749 "uuid": "ff2b1877-1f7b-4f5d-bb39-fa7043dc4aa9", 00:12:31.749 "is_configured": true, 00:12:31.749 "data_offset": 2048, 00:12:31.749 "data_size": 63488 00:12:31.749 }, 00:12:31.749 { 00:12:31.749 "name": "BaseBdev4", 00:12:31.749 "uuid": "c981f6e4-6148-4288-be98-e0b5004d7a32", 00:12:31.749 "is_configured": true, 00:12:31.749 "data_offset": 2048, 00:12:31.749 "data_size": 63488 00:12:31.749 } 00:12:31.749 ] 00:12:31.749 }' 00:12:31.749 20:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.749 20:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.316 20:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:32.316 20:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:32.316 20:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:32.316 20:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:32.316 20:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:32.316 20:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:32.316 
20:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:32.316 20:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.316 20:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:32.316 20:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.316 [2024-10-17 20:09:17.813916] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:32.316 20:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.316 20:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:32.316 "name": "Existed_Raid", 00:12:32.316 "aliases": [ 00:12:32.316 "a3b8e0a8-4ae3-4159-9a69-dfd35e20261d" 00:12:32.316 ], 00:12:32.316 "product_name": "Raid Volume", 00:12:32.316 "block_size": 512, 00:12:32.316 "num_blocks": 253952, 00:12:32.316 "uuid": "a3b8e0a8-4ae3-4159-9a69-dfd35e20261d", 00:12:32.316 "assigned_rate_limits": { 00:12:32.316 "rw_ios_per_sec": 0, 00:12:32.316 "rw_mbytes_per_sec": 0, 00:12:32.316 "r_mbytes_per_sec": 0, 00:12:32.316 "w_mbytes_per_sec": 0 00:12:32.316 }, 00:12:32.316 "claimed": false, 00:12:32.316 "zoned": false, 00:12:32.316 "supported_io_types": { 00:12:32.316 "read": true, 00:12:32.316 "write": true, 00:12:32.316 "unmap": true, 00:12:32.316 "flush": true, 00:12:32.316 "reset": true, 00:12:32.316 "nvme_admin": false, 00:12:32.316 "nvme_io": false, 00:12:32.316 "nvme_io_md": false, 00:12:32.316 "write_zeroes": true, 00:12:32.316 "zcopy": false, 00:12:32.316 "get_zone_info": false, 00:12:32.316 "zone_management": false, 00:12:32.316 "zone_append": false, 00:12:32.316 "compare": false, 00:12:32.316 "compare_and_write": false, 00:12:32.316 "abort": false, 00:12:32.316 "seek_hole": false, 00:12:32.316 "seek_data": false, 00:12:32.316 "copy": false, 00:12:32.316 
"nvme_iov_md": false 00:12:32.316 }, 00:12:32.316 "memory_domains": [ 00:12:32.316 { 00:12:32.316 "dma_device_id": "system", 00:12:32.316 "dma_device_type": 1 00:12:32.316 }, 00:12:32.316 { 00:12:32.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:32.316 "dma_device_type": 2 00:12:32.316 }, 00:12:32.316 { 00:12:32.316 "dma_device_id": "system", 00:12:32.316 "dma_device_type": 1 00:12:32.316 }, 00:12:32.316 { 00:12:32.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:32.316 "dma_device_type": 2 00:12:32.316 }, 00:12:32.316 { 00:12:32.316 "dma_device_id": "system", 00:12:32.316 "dma_device_type": 1 00:12:32.316 }, 00:12:32.316 { 00:12:32.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:32.316 "dma_device_type": 2 00:12:32.316 }, 00:12:32.316 { 00:12:32.316 "dma_device_id": "system", 00:12:32.316 "dma_device_type": 1 00:12:32.316 }, 00:12:32.316 { 00:12:32.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:32.316 "dma_device_type": 2 00:12:32.316 } 00:12:32.316 ], 00:12:32.316 "driver_specific": { 00:12:32.316 "raid": { 00:12:32.316 "uuid": "a3b8e0a8-4ae3-4159-9a69-dfd35e20261d", 00:12:32.316 "strip_size_kb": 64, 00:12:32.316 "state": "online", 00:12:32.316 "raid_level": "raid0", 00:12:32.316 "superblock": true, 00:12:32.316 "num_base_bdevs": 4, 00:12:32.316 "num_base_bdevs_discovered": 4, 00:12:32.316 "num_base_bdevs_operational": 4, 00:12:32.316 "base_bdevs_list": [ 00:12:32.316 { 00:12:32.316 "name": "BaseBdev1", 00:12:32.316 "uuid": "805bcd08-871c-431b-a0b1-3852a27abfa6", 00:12:32.316 "is_configured": true, 00:12:32.316 "data_offset": 2048, 00:12:32.316 "data_size": 63488 00:12:32.316 }, 00:12:32.316 { 00:12:32.316 "name": "BaseBdev2", 00:12:32.316 "uuid": "5c8ce887-e35a-44d4-b26d-f31caa7f39b4", 00:12:32.316 "is_configured": true, 00:12:32.316 "data_offset": 2048, 00:12:32.316 "data_size": 63488 00:12:32.316 }, 00:12:32.316 { 00:12:32.316 "name": "BaseBdev3", 00:12:32.316 "uuid": "ff2b1877-1f7b-4f5d-bb39-fa7043dc4aa9", 00:12:32.317 "is_configured": true, 
00:12:32.317 "data_offset": 2048, 00:12:32.317 "data_size": 63488 00:12:32.317 }, 00:12:32.317 { 00:12:32.317 "name": "BaseBdev4", 00:12:32.317 "uuid": "c981f6e4-6148-4288-be98-e0b5004d7a32", 00:12:32.317 "is_configured": true, 00:12:32.317 "data_offset": 2048, 00:12:32.317 "data_size": 63488 00:12:32.317 } 00:12:32.317 ] 00:12:32.317 } 00:12:32.317 } 00:12:32.317 }' 00:12:32.317 20:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:32.317 20:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:32.317 BaseBdev2 00:12:32.317 BaseBdev3 00:12:32.317 BaseBdev4' 00:12:32.317 20:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:32.317 20:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:32.317 20:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:32.317 20:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:32.317 20:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.575 20:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.576 20:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:32.576 20:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.576 20:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:32.576 20:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:32.576 20:09:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:32.576 20:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:32.576 20:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.576 20:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:32.576 20:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.576 20:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.576 20:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:32.576 20:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:32.576 20:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:32.576 20:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:32.576 20:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:32.576 20:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.576 20:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.576 20:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.576 20:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:32.576 20:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:32.576 20:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:12:32.576 20:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:32.576 20:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:32.576 20:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.576 20:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.576 20:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.576 20:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:32.576 20:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:32.576 20:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:32.576 20:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.576 20:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.576 [2024-10-17 20:09:18.197876] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:32.576 [2024-10-17 20:09:18.197917] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:32.576 [2024-10-17 20:09:18.197985] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:32.835 20:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.835 20:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:32.835 20:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:12:32.835 20:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:12:32.835 20:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:12:32.835 20:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:32.835 20:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:12:32.835 20:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:32.835 20:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:32.835 20:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:32.835 20:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:32.835 20:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:32.835 20:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.835 20:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.835 20:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.835 20:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.835 20:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.835 20:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:32.835 20:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.835 20:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.835 20:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:32.835 20:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.835 "name": "Existed_Raid", 00:12:32.835 "uuid": "a3b8e0a8-4ae3-4159-9a69-dfd35e20261d", 00:12:32.835 "strip_size_kb": 64, 00:12:32.835 "state": "offline", 00:12:32.835 "raid_level": "raid0", 00:12:32.835 "superblock": true, 00:12:32.835 "num_base_bdevs": 4, 00:12:32.835 "num_base_bdevs_discovered": 3, 00:12:32.835 "num_base_bdevs_operational": 3, 00:12:32.835 "base_bdevs_list": [ 00:12:32.835 { 00:12:32.835 "name": null, 00:12:32.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.835 "is_configured": false, 00:12:32.835 "data_offset": 0, 00:12:32.835 "data_size": 63488 00:12:32.835 }, 00:12:32.835 { 00:12:32.835 "name": "BaseBdev2", 00:12:32.835 "uuid": "5c8ce887-e35a-44d4-b26d-f31caa7f39b4", 00:12:32.835 "is_configured": true, 00:12:32.835 "data_offset": 2048, 00:12:32.835 "data_size": 63488 00:12:32.835 }, 00:12:32.835 { 00:12:32.835 "name": "BaseBdev3", 00:12:32.835 "uuid": "ff2b1877-1f7b-4f5d-bb39-fa7043dc4aa9", 00:12:32.835 "is_configured": true, 00:12:32.835 "data_offset": 2048, 00:12:32.835 "data_size": 63488 00:12:32.835 }, 00:12:32.835 { 00:12:32.835 "name": "BaseBdev4", 00:12:32.835 "uuid": "c981f6e4-6148-4288-be98-e0b5004d7a32", 00:12:32.835 "is_configured": true, 00:12:32.835 "data_offset": 2048, 00:12:32.835 "data_size": 63488 00:12:32.835 } 00:12:32.835 ] 00:12:32.835 }' 00:12:32.835 20:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.835 20:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.426 20:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:33.426 20:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:33.426 20:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.426 
20:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.426 20:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:33.426 20:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.426 20:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.426 20:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:33.426 20:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:33.426 20:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:33.426 20:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.426 20:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.426 [2024-10-17 20:09:18.869236] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:33.426 20:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.426 20:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:33.426 20:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:33.426 20:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.426 20:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:33.426 20:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.426 20:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.426 20:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:12:33.426 20:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:33.426 20:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:33.426 20:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:33.426 20:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.426 20:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.426 [2024-10-17 20:09:19.009613] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:33.687 20:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.687 20:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:33.687 20:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:33.687 20:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.687 20:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:33.687 20:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.687 20:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.687 20:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.687 20:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:33.687 20:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:33.687 20:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:33.687 20:09:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.687 20:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.687 [2024-10-17 20:09:19.145747] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:33.687 [2024-10-17 20:09:19.145956] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:33.687 20:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.687 20:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:33.687 20:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:33.687 20:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.687 20:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:33.687 20:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.687 20:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.687 20:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.687 20:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:33.687 20:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:33.687 20:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:33.687 20:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:33.687 20:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:33.687 20:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:12:33.687 20:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.687 20:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.687 BaseBdev2 00:12:33.687 20:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.687 20:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:33.687 20:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:33.687 20:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:33.687 20:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:33.687 20:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:33.687 20:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:33.687 20:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:33.687 20:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.687 20:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.947 20:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.947 20:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:33.947 20:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.947 20:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.947 [ 00:12:33.947 { 00:12:33.947 "name": "BaseBdev2", 00:12:33.947 "aliases": [ 00:12:33.947 
"3657a548-c79f-43de-804b-0dce33f96bf4" 00:12:33.947 ], 00:12:33.947 "product_name": "Malloc disk", 00:12:33.947 "block_size": 512, 00:12:33.947 "num_blocks": 65536, 00:12:33.947 "uuid": "3657a548-c79f-43de-804b-0dce33f96bf4", 00:12:33.947 "assigned_rate_limits": { 00:12:33.947 "rw_ios_per_sec": 0, 00:12:33.947 "rw_mbytes_per_sec": 0, 00:12:33.947 "r_mbytes_per_sec": 0, 00:12:33.947 "w_mbytes_per_sec": 0 00:12:33.947 }, 00:12:33.947 "claimed": false, 00:12:33.947 "zoned": false, 00:12:33.947 "supported_io_types": { 00:12:33.947 "read": true, 00:12:33.947 "write": true, 00:12:33.947 "unmap": true, 00:12:33.947 "flush": true, 00:12:33.947 "reset": true, 00:12:33.947 "nvme_admin": false, 00:12:33.947 "nvme_io": false, 00:12:33.947 "nvme_io_md": false, 00:12:33.947 "write_zeroes": true, 00:12:33.947 "zcopy": true, 00:12:33.947 "get_zone_info": false, 00:12:33.947 "zone_management": false, 00:12:33.947 "zone_append": false, 00:12:33.947 "compare": false, 00:12:33.947 "compare_and_write": false, 00:12:33.947 "abort": true, 00:12:33.947 "seek_hole": false, 00:12:33.947 "seek_data": false, 00:12:33.947 "copy": true, 00:12:33.947 "nvme_iov_md": false 00:12:33.947 }, 00:12:33.947 "memory_domains": [ 00:12:33.947 { 00:12:33.947 "dma_device_id": "system", 00:12:33.947 "dma_device_type": 1 00:12:33.947 }, 00:12:33.947 { 00:12:33.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:33.947 "dma_device_type": 2 00:12:33.947 } 00:12:33.947 ], 00:12:33.947 "driver_specific": {} 00:12:33.947 } 00:12:33.947 ] 00:12:33.947 20:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.947 20:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:33.947 20:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:33.947 20:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:33.947 20:09:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:33.947 20:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.947 20:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.947 BaseBdev3 00:12:33.947 20:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.947 20:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:33.947 20:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:12:33.947 20:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:33.947 20:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:33.947 20:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:33.947 20:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:33.947 20:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:33.947 20:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.947 20:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.947 20:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.947 20:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:33.947 20:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.947 20:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.947 [ 00:12:33.947 { 
00:12:33.947 "name": "BaseBdev3", 00:12:33.947 "aliases": [ 00:12:33.947 "163c4c95-06c3-4235-8925-c8f7c5b9967a" 00:12:33.947 ], 00:12:33.947 "product_name": "Malloc disk", 00:12:33.947 "block_size": 512, 00:12:33.947 "num_blocks": 65536, 00:12:33.947 "uuid": "163c4c95-06c3-4235-8925-c8f7c5b9967a", 00:12:33.947 "assigned_rate_limits": { 00:12:33.947 "rw_ios_per_sec": 0, 00:12:33.947 "rw_mbytes_per_sec": 0, 00:12:33.947 "r_mbytes_per_sec": 0, 00:12:33.947 "w_mbytes_per_sec": 0 00:12:33.947 }, 00:12:33.947 "claimed": false, 00:12:33.947 "zoned": false, 00:12:33.947 "supported_io_types": { 00:12:33.947 "read": true, 00:12:33.947 "write": true, 00:12:33.947 "unmap": true, 00:12:33.947 "flush": true, 00:12:33.947 "reset": true, 00:12:33.947 "nvme_admin": false, 00:12:33.947 "nvme_io": false, 00:12:33.947 "nvme_io_md": false, 00:12:33.947 "write_zeroes": true, 00:12:33.947 "zcopy": true, 00:12:33.947 "get_zone_info": false, 00:12:33.947 "zone_management": false, 00:12:33.947 "zone_append": false, 00:12:33.947 "compare": false, 00:12:33.947 "compare_and_write": false, 00:12:33.947 "abort": true, 00:12:33.947 "seek_hole": false, 00:12:33.947 "seek_data": false, 00:12:33.947 "copy": true, 00:12:33.947 "nvme_iov_md": false 00:12:33.947 }, 00:12:33.947 "memory_domains": [ 00:12:33.947 { 00:12:33.947 "dma_device_id": "system", 00:12:33.947 "dma_device_type": 1 00:12:33.947 }, 00:12:33.947 { 00:12:33.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:33.947 "dma_device_type": 2 00:12:33.947 } 00:12:33.947 ], 00:12:33.947 "driver_specific": {} 00:12:33.947 } 00:12:33.947 ] 00:12:33.947 20:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.947 20:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:33.947 20:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:33.947 20:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:12:33.947 20:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:33.947 20:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.947 20:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.947 BaseBdev4 00:12:33.947 20:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.947 20:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:33.947 20:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:12:33.947 20:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:33.947 20:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:33.948 20:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:33.948 20:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:33.948 20:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:33.948 20:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.948 20:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.948 20:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.948 20:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:33.948 20:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.948 20:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:12:33.948 [ 00:12:33.948 { 00:12:33.948 "name": "BaseBdev4", 00:12:33.948 "aliases": [ 00:12:33.948 "74357438-ee37-44f1-b687-89031d2e82e3" 00:12:33.948 ], 00:12:33.948 "product_name": "Malloc disk", 00:12:33.948 "block_size": 512, 00:12:33.948 "num_blocks": 65536, 00:12:33.948 "uuid": "74357438-ee37-44f1-b687-89031d2e82e3", 00:12:33.948 "assigned_rate_limits": { 00:12:33.948 "rw_ios_per_sec": 0, 00:12:33.948 "rw_mbytes_per_sec": 0, 00:12:33.948 "r_mbytes_per_sec": 0, 00:12:33.948 "w_mbytes_per_sec": 0 00:12:33.948 }, 00:12:33.948 "claimed": false, 00:12:33.948 "zoned": false, 00:12:33.948 "supported_io_types": { 00:12:33.948 "read": true, 00:12:33.948 "write": true, 00:12:33.948 "unmap": true, 00:12:33.948 "flush": true, 00:12:33.948 "reset": true, 00:12:33.948 "nvme_admin": false, 00:12:33.948 "nvme_io": false, 00:12:33.948 "nvme_io_md": false, 00:12:33.948 "write_zeroes": true, 00:12:33.948 "zcopy": true, 00:12:33.948 "get_zone_info": false, 00:12:33.948 "zone_management": false, 00:12:33.948 "zone_append": false, 00:12:33.948 "compare": false, 00:12:33.948 "compare_and_write": false, 00:12:33.948 "abort": true, 00:12:33.948 "seek_hole": false, 00:12:33.948 "seek_data": false, 00:12:33.948 "copy": true, 00:12:33.948 "nvme_iov_md": false 00:12:33.948 }, 00:12:33.948 "memory_domains": [ 00:12:33.948 { 00:12:33.948 "dma_device_id": "system", 00:12:33.948 "dma_device_type": 1 00:12:33.948 }, 00:12:33.948 { 00:12:33.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:33.948 "dma_device_type": 2 00:12:33.948 } 00:12:33.948 ], 00:12:33.948 "driver_specific": {} 00:12:33.948 } 00:12:33.948 ] 00:12:33.948 20:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.948 20:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:33.948 20:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:33.948 20:09:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:33.948 20:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:33.948 20:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.948 20:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.948 [2024-10-17 20:09:19.507060] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:33.948 [2024-10-17 20:09:19.507250] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:33.948 [2024-10-17 20:09:19.507419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:33.948 [2024-10-17 20:09:19.509964] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:33.948 [2024-10-17 20:09:19.510192] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:33.948 20:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.948 20:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:33.948 20:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:33.948 20:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:33.948 20:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:33.948 20:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:33.948 20:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:12:33.948 20:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.948 20:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.948 20:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.948 20:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.948 20:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.948 20:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:33.948 20:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.948 20:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.948 20:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.948 20:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.948 "name": "Existed_Raid", 00:12:33.948 "uuid": "3d66f752-4f0f-4e2f-8216-61df06c74822", 00:12:33.948 "strip_size_kb": 64, 00:12:33.948 "state": "configuring", 00:12:33.948 "raid_level": "raid0", 00:12:33.948 "superblock": true, 00:12:33.948 "num_base_bdevs": 4, 00:12:33.948 "num_base_bdevs_discovered": 3, 00:12:33.948 "num_base_bdevs_operational": 4, 00:12:33.948 "base_bdevs_list": [ 00:12:33.948 { 00:12:33.948 "name": "BaseBdev1", 00:12:33.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.948 "is_configured": false, 00:12:33.948 "data_offset": 0, 00:12:33.948 "data_size": 0 00:12:33.948 }, 00:12:33.948 { 00:12:33.948 "name": "BaseBdev2", 00:12:33.948 "uuid": "3657a548-c79f-43de-804b-0dce33f96bf4", 00:12:33.948 "is_configured": true, 00:12:33.948 "data_offset": 2048, 00:12:33.948 "data_size": 63488 
00:12:33.948 }, 00:12:33.948 { 00:12:33.948 "name": "BaseBdev3", 00:12:33.948 "uuid": "163c4c95-06c3-4235-8925-c8f7c5b9967a", 00:12:33.948 "is_configured": true, 00:12:33.948 "data_offset": 2048, 00:12:33.948 "data_size": 63488 00:12:33.948 }, 00:12:33.948 { 00:12:33.948 "name": "BaseBdev4", 00:12:33.948 "uuid": "74357438-ee37-44f1-b687-89031d2e82e3", 00:12:33.948 "is_configured": true, 00:12:33.948 "data_offset": 2048, 00:12:33.948 "data_size": 63488 00:12:33.948 } 00:12:33.948 ] 00:12:33.948 }' 00:12:33.948 20:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.948 20:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.516 20:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:34.516 20:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.516 20:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.516 [2024-10-17 20:09:20.051278] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:34.516 20:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.516 20:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:34.516 20:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:34.516 20:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:34.516 20:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:34.516 20:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:34.516 20:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:12:34.516 20:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.516 20:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.516 20:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.516 20:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.516 20:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.516 20:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:34.516 20:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.516 20:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.516 20:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.516 20:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.516 "name": "Existed_Raid", 00:12:34.516 "uuid": "3d66f752-4f0f-4e2f-8216-61df06c74822", 00:12:34.516 "strip_size_kb": 64, 00:12:34.516 "state": "configuring", 00:12:34.516 "raid_level": "raid0", 00:12:34.516 "superblock": true, 00:12:34.516 "num_base_bdevs": 4, 00:12:34.516 "num_base_bdevs_discovered": 2, 00:12:34.516 "num_base_bdevs_operational": 4, 00:12:34.516 "base_bdevs_list": [ 00:12:34.516 { 00:12:34.516 "name": "BaseBdev1", 00:12:34.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.516 "is_configured": false, 00:12:34.516 "data_offset": 0, 00:12:34.516 "data_size": 0 00:12:34.516 }, 00:12:34.516 { 00:12:34.516 "name": null, 00:12:34.516 "uuid": "3657a548-c79f-43de-804b-0dce33f96bf4", 00:12:34.516 "is_configured": false, 00:12:34.516 "data_offset": 0, 00:12:34.516 "data_size": 63488 
00:12:34.516 }, 00:12:34.516 { 00:12:34.516 "name": "BaseBdev3", 00:12:34.516 "uuid": "163c4c95-06c3-4235-8925-c8f7c5b9967a", 00:12:34.516 "is_configured": true, 00:12:34.516 "data_offset": 2048, 00:12:34.516 "data_size": 63488 00:12:34.516 }, 00:12:34.516 { 00:12:34.516 "name": "BaseBdev4", 00:12:34.516 "uuid": "74357438-ee37-44f1-b687-89031d2e82e3", 00:12:34.516 "is_configured": true, 00:12:34.516 "data_offset": 2048, 00:12:34.516 "data_size": 63488 00:12:34.516 } 00:12:34.516 ] 00:12:34.516 }' 00:12:34.516 20:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.516 20:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.084 20:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.084 20:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:35.084 20:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.084 20:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.084 20:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.084 20:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:35.084 20:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:35.084 20:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.084 20:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.084 [2024-10-17 20:09:20.675502] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:35.084 BaseBdev1 00:12:35.084 20:09:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.084 20:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:35.084 20:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:35.084 20:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:35.084 20:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:35.084 20:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:35.084 20:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:35.084 20:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:35.084 20:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.084 20:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.084 20:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.084 20:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:35.084 20:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.084 20:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.084 [ 00:12:35.084 { 00:12:35.084 "name": "BaseBdev1", 00:12:35.084 "aliases": [ 00:12:35.084 "4a2e217c-2874-46b8-a200-327191ad79e2" 00:12:35.084 ], 00:12:35.084 "product_name": "Malloc disk", 00:12:35.084 "block_size": 512, 00:12:35.084 "num_blocks": 65536, 00:12:35.084 "uuid": "4a2e217c-2874-46b8-a200-327191ad79e2", 00:12:35.084 "assigned_rate_limits": { 00:12:35.084 "rw_ios_per_sec": 0, 00:12:35.084 "rw_mbytes_per_sec": 0, 
00:12:35.084 "r_mbytes_per_sec": 0, 00:12:35.084 "w_mbytes_per_sec": 0 00:12:35.084 }, 00:12:35.084 "claimed": true, 00:12:35.084 "claim_type": "exclusive_write", 00:12:35.084 "zoned": false, 00:12:35.084 "supported_io_types": { 00:12:35.084 "read": true, 00:12:35.084 "write": true, 00:12:35.084 "unmap": true, 00:12:35.084 "flush": true, 00:12:35.084 "reset": true, 00:12:35.084 "nvme_admin": false, 00:12:35.084 "nvme_io": false, 00:12:35.084 "nvme_io_md": false, 00:12:35.084 "write_zeroes": true, 00:12:35.084 "zcopy": true, 00:12:35.084 "get_zone_info": false, 00:12:35.084 "zone_management": false, 00:12:35.084 "zone_append": false, 00:12:35.084 "compare": false, 00:12:35.084 "compare_and_write": false, 00:12:35.084 "abort": true, 00:12:35.084 "seek_hole": false, 00:12:35.084 "seek_data": false, 00:12:35.084 "copy": true, 00:12:35.084 "nvme_iov_md": false 00:12:35.084 }, 00:12:35.084 "memory_domains": [ 00:12:35.084 { 00:12:35.084 "dma_device_id": "system", 00:12:35.084 "dma_device_type": 1 00:12:35.084 }, 00:12:35.084 { 00:12:35.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:35.084 "dma_device_type": 2 00:12:35.084 } 00:12:35.084 ], 00:12:35.084 "driver_specific": {} 00:12:35.084 } 00:12:35.084 ] 00:12:35.084 20:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.084 20:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:35.084 20:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:35.084 20:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:35.084 20:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:35.084 20:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:35.084 20:09:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:35.084 20:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:35.084 20:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.084 20:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.084 20:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.084 20:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.084 20:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.084 20:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.084 20:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:35.084 20:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.084 20:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.342 20:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.342 "name": "Existed_Raid", 00:12:35.342 "uuid": "3d66f752-4f0f-4e2f-8216-61df06c74822", 00:12:35.342 "strip_size_kb": 64, 00:12:35.342 "state": "configuring", 00:12:35.342 "raid_level": "raid0", 00:12:35.342 "superblock": true, 00:12:35.342 "num_base_bdevs": 4, 00:12:35.342 "num_base_bdevs_discovered": 3, 00:12:35.342 "num_base_bdevs_operational": 4, 00:12:35.342 "base_bdevs_list": [ 00:12:35.342 { 00:12:35.342 "name": "BaseBdev1", 00:12:35.342 "uuid": "4a2e217c-2874-46b8-a200-327191ad79e2", 00:12:35.342 "is_configured": true, 00:12:35.342 "data_offset": 2048, 00:12:35.342 "data_size": 63488 00:12:35.342 }, 00:12:35.342 { 
00:12:35.342 "name": null, 00:12:35.342 "uuid": "3657a548-c79f-43de-804b-0dce33f96bf4", 00:12:35.342 "is_configured": false, 00:12:35.342 "data_offset": 0, 00:12:35.342 "data_size": 63488 00:12:35.342 }, 00:12:35.342 { 00:12:35.342 "name": "BaseBdev3", 00:12:35.342 "uuid": "163c4c95-06c3-4235-8925-c8f7c5b9967a", 00:12:35.342 "is_configured": true, 00:12:35.342 "data_offset": 2048, 00:12:35.342 "data_size": 63488 00:12:35.342 }, 00:12:35.342 { 00:12:35.342 "name": "BaseBdev4", 00:12:35.342 "uuid": "74357438-ee37-44f1-b687-89031d2e82e3", 00:12:35.342 "is_configured": true, 00:12:35.342 "data_offset": 2048, 00:12:35.342 "data_size": 63488 00:12:35.342 } 00:12:35.342 ] 00:12:35.342 }' 00:12:35.342 20:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.342 20:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.909 20:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:35.909 20:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.909 20:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.909 20:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.909 20:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.909 20:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:35.909 20:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:35.909 20:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.909 20:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.909 [2024-10-17 20:09:21.331822] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:35.909 20:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.909 20:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:35.909 20:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:35.909 20:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:35.909 20:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:35.909 20:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:35.909 20:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:35.909 20:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.909 20:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.909 20:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.909 20:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.909 20:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.909 20:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:35.909 20:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.909 20:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.909 20:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.909 20:09:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.909 "name": "Existed_Raid", 00:12:35.909 "uuid": "3d66f752-4f0f-4e2f-8216-61df06c74822", 00:12:35.909 "strip_size_kb": 64, 00:12:35.909 "state": "configuring", 00:12:35.909 "raid_level": "raid0", 00:12:35.909 "superblock": true, 00:12:35.909 "num_base_bdevs": 4, 00:12:35.909 "num_base_bdevs_discovered": 2, 00:12:35.909 "num_base_bdevs_operational": 4, 00:12:35.909 "base_bdevs_list": [ 00:12:35.909 { 00:12:35.909 "name": "BaseBdev1", 00:12:35.909 "uuid": "4a2e217c-2874-46b8-a200-327191ad79e2", 00:12:35.909 "is_configured": true, 00:12:35.909 "data_offset": 2048, 00:12:35.909 "data_size": 63488 00:12:35.909 }, 00:12:35.909 { 00:12:35.909 "name": null, 00:12:35.909 "uuid": "3657a548-c79f-43de-804b-0dce33f96bf4", 00:12:35.909 "is_configured": false, 00:12:35.909 "data_offset": 0, 00:12:35.909 "data_size": 63488 00:12:35.909 }, 00:12:35.909 { 00:12:35.909 "name": null, 00:12:35.909 "uuid": "163c4c95-06c3-4235-8925-c8f7c5b9967a", 00:12:35.909 "is_configured": false, 00:12:35.909 "data_offset": 0, 00:12:35.909 "data_size": 63488 00:12:35.909 }, 00:12:35.909 { 00:12:35.909 "name": "BaseBdev4", 00:12:35.909 "uuid": "74357438-ee37-44f1-b687-89031d2e82e3", 00:12:35.909 "is_configured": true, 00:12:35.909 "data_offset": 2048, 00:12:35.909 "data_size": 63488 00:12:35.909 } 00:12:35.909 ] 00:12:35.909 }' 00:12:35.909 20:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.909 20:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.477 20:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.477 20:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:36.477 20:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.477 
20:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.477 20:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.477 20:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:36.477 20:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:36.477 20:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.477 20:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.477 [2024-10-17 20:09:21.911946] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:36.477 20:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.477 20:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:36.477 20:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:36.477 20:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:36.477 20:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:36.477 20:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:36.477 20:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:36.477 20:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.477 20:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.477 20:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:36.477 20:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.477 20:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.477 20:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.477 20:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.477 20:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:36.477 20:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.477 20:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.477 "name": "Existed_Raid", 00:12:36.477 "uuid": "3d66f752-4f0f-4e2f-8216-61df06c74822", 00:12:36.477 "strip_size_kb": 64, 00:12:36.477 "state": "configuring", 00:12:36.477 "raid_level": "raid0", 00:12:36.477 "superblock": true, 00:12:36.477 "num_base_bdevs": 4, 00:12:36.477 "num_base_bdevs_discovered": 3, 00:12:36.477 "num_base_bdevs_operational": 4, 00:12:36.477 "base_bdevs_list": [ 00:12:36.477 { 00:12:36.477 "name": "BaseBdev1", 00:12:36.477 "uuid": "4a2e217c-2874-46b8-a200-327191ad79e2", 00:12:36.477 "is_configured": true, 00:12:36.477 "data_offset": 2048, 00:12:36.477 "data_size": 63488 00:12:36.477 }, 00:12:36.477 { 00:12:36.477 "name": null, 00:12:36.477 "uuid": "3657a548-c79f-43de-804b-0dce33f96bf4", 00:12:36.477 "is_configured": false, 00:12:36.477 "data_offset": 0, 00:12:36.477 "data_size": 63488 00:12:36.477 }, 00:12:36.477 { 00:12:36.477 "name": "BaseBdev3", 00:12:36.477 "uuid": "163c4c95-06c3-4235-8925-c8f7c5b9967a", 00:12:36.477 "is_configured": true, 00:12:36.477 "data_offset": 2048, 00:12:36.477 "data_size": 63488 00:12:36.477 }, 00:12:36.477 { 00:12:36.477 "name": "BaseBdev4", 00:12:36.477 "uuid": 
"74357438-ee37-44f1-b687-89031d2e82e3", 00:12:36.477 "is_configured": true, 00:12:36.477 "data_offset": 2048, 00:12:36.477 "data_size": 63488 00:12:36.477 } 00:12:36.477 ] 00:12:36.477 }' 00:12:36.477 20:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.477 20:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.052 20:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.052 20:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.052 20:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.052 20:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:37.052 20:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.052 20:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:37.052 20:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:37.052 20:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.052 20:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.052 [2024-10-17 20:09:22.512220] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:37.052 20:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.052 20:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:37.052 20:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:37.052 20:09:22 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:37.052 20:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:37.052 20:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:37.052 20:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:37.052 20:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.052 20:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.052 20:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.052 20:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.052 20:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.052 20:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:37.052 20:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.052 20:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.052 20:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.052 20:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.052 "name": "Existed_Raid", 00:12:37.052 "uuid": "3d66f752-4f0f-4e2f-8216-61df06c74822", 00:12:37.052 "strip_size_kb": 64, 00:12:37.052 "state": "configuring", 00:12:37.052 "raid_level": "raid0", 00:12:37.052 "superblock": true, 00:12:37.052 "num_base_bdevs": 4, 00:12:37.052 "num_base_bdevs_discovered": 2, 00:12:37.052 "num_base_bdevs_operational": 4, 00:12:37.052 "base_bdevs_list": [ 00:12:37.052 { 00:12:37.052 "name": null, 00:12:37.052 
"uuid": "4a2e217c-2874-46b8-a200-327191ad79e2", 00:12:37.052 "is_configured": false, 00:12:37.052 "data_offset": 0, 00:12:37.052 "data_size": 63488 00:12:37.052 }, 00:12:37.052 { 00:12:37.052 "name": null, 00:12:37.052 "uuid": "3657a548-c79f-43de-804b-0dce33f96bf4", 00:12:37.052 "is_configured": false, 00:12:37.052 "data_offset": 0, 00:12:37.052 "data_size": 63488 00:12:37.052 }, 00:12:37.052 { 00:12:37.052 "name": "BaseBdev3", 00:12:37.052 "uuid": "163c4c95-06c3-4235-8925-c8f7c5b9967a", 00:12:37.052 "is_configured": true, 00:12:37.052 "data_offset": 2048, 00:12:37.052 "data_size": 63488 00:12:37.052 }, 00:12:37.052 { 00:12:37.052 "name": "BaseBdev4", 00:12:37.052 "uuid": "74357438-ee37-44f1-b687-89031d2e82e3", 00:12:37.052 "is_configured": true, 00:12:37.052 "data_offset": 2048, 00:12:37.052 "data_size": 63488 00:12:37.052 } 00:12:37.052 ] 00:12:37.052 }' 00:12:37.052 20:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.052 20:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.619 20:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.619 20:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.619 20:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.619 20:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:37.619 20:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.619 20:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:37.619 20:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:37.619 20:09:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.619 20:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.619 [2024-10-17 20:09:23.231722] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:37.619 20:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.619 20:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:37.619 20:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:37.619 20:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:37.619 20:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:37.619 20:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:37.619 20:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:37.619 20:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.619 20:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.619 20:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.619 20:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.620 20:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.620 20:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.620 20:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.620 20:09:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:37.620 20:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.878 20:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.878 "name": "Existed_Raid", 00:12:37.878 "uuid": "3d66f752-4f0f-4e2f-8216-61df06c74822", 00:12:37.878 "strip_size_kb": 64, 00:12:37.878 "state": "configuring", 00:12:37.878 "raid_level": "raid0", 00:12:37.878 "superblock": true, 00:12:37.878 "num_base_bdevs": 4, 00:12:37.878 "num_base_bdevs_discovered": 3, 00:12:37.878 "num_base_bdevs_operational": 4, 00:12:37.878 "base_bdevs_list": [ 00:12:37.878 { 00:12:37.878 "name": null, 00:12:37.878 "uuid": "4a2e217c-2874-46b8-a200-327191ad79e2", 00:12:37.878 "is_configured": false, 00:12:37.878 "data_offset": 0, 00:12:37.878 "data_size": 63488 00:12:37.878 }, 00:12:37.878 { 00:12:37.878 "name": "BaseBdev2", 00:12:37.878 "uuid": "3657a548-c79f-43de-804b-0dce33f96bf4", 00:12:37.878 "is_configured": true, 00:12:37.878 "data_offset": 2048, 00:12:37.878 "data_size": 63488 00:12:37.878 }, 00:12:37.878 { 00:12:37.878 "name": "BaseBdev3", 00:12:37.878 "uuid": "163c4c95-06c3-4235-8925-c8f7c5b9967a", 00:12:37.878 "is_configured": true, 00:12:37.878 "data_offset": 2048, 00:12:37.878 "data_size": 63488 00:12:37.878 }, 00:12:37.878 { 00:12:37.878 "name": "BaseBdev4", 00:12:37.878 "uuid": "74357438-ee37-44f1-b687-89031d2e82e3", 00:12:37.878 "is_configured": true, 00:12:37.878 "data_offset": 2048, 00:12:37.878 "data_size": 63488 00:12:37.878 } 00:12:37.878 ] 00:12:37.878 }' 00:12:37.878 20:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.879 20:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.137 20:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.137 20:09:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.137 20:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:38.137 20:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.395 20:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.395 20:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:38.395 20:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.395 20:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.395 20:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.395 20:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:38.395 20:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.395 20:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4a2e217c-2874-46b8-a200-327191ad79e2 00:12:38.395 20:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.395 20:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.395 NewBaseBdev 00:12:38.395 [2024-10-17 20:09:23.930193] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:38.395 [2024-10-17 20:09:23.930534] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:38.395 [2024-10-17 20:09:23.930552] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:38.395 [2024-10-17 20:09:23.930856] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:38.395 [2024-10-17 20:09:23.931020] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:38.395 [2024-10-17 20:09:23.931041] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:38.395 [2024-10-17 20:09:23.931260] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:38.395 20:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.395 20:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:38.395 20:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:12:38.395 20:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:38.395 20:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:38.395 20:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:38.395 20:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:38.395 20:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:38.395 20:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.395 20:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.395 20:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.395 20:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:38.395 20:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.395 
20:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.395 [ 00:12:38.395 { 00:12:38.395 "name": "NewBaseBdev", 00:12:38.395 "aliases": [ 00:12:38.395 "4a2e217c-2874-46b8-a200-327191ad79e2" 00:12:38.395 ], 00:12:38.395 "product_name": "Malloc disk", 00:12:38.395 "block_size": 512, 00:12:38.395 "num_blocks": 65536, 00:12:38.395 "uuid": "4a2e217c-2874-46b8-a200-327191ad79e2", 00:12:38.395 "assigned_rate_limits": { 00:12:38.395 "rw_ios_per_sec": 0, 00:12:38.395 "rw_mbytes_per_sec": 0, 00:12:38.395 "r_mbytes_per_sec": 0, 00:12:38.395 "w_mbytes_per_sec": 0 00:12:38.395 }, 00:12:38.395 "claimed": true, 00:12:38.395 "claim_type": "exclusive_write", 00:12:38.395 "zoned": false, 00:12:38.395 "supported_io_types": { 00:12:38.395 "read": true, 00:12:38.395 "write": true, 00:12:38.395 "unmap": true, 00:12:38.395 "flush": true, 00:12:38.395 "reset": true, 00:12:38.395 "nvme_admin": false, 00:12:38.395 "nvme_io": false, 00:12:38.395 "nvme_io_md": false, 00:12:38.395 "write_zeroes": true, 00:12:38.395 "zcopy": true, 00:12:38.395 "get_zone_info": false, 00:12:38.395 "zone_management": false, 00:12:38.395 "zone_append": false, 00:12:38.395 "compare": false, 00:12:38.395 "compare_and_write": false, 00:12:38.395 "abort": true, 00:12:38.395 "seek_hole": false, 00:12:38.395 "seek_data": false, 00:12:38.395 "copy": true, 00:12:38.395 "nvme_iov_md": false 00:12:38.395 }, 00:12:38.395 "memory_domains": [ 00:12:38.395 { 00:12:38.395 "dma_device_id": "system", 00:12:38.395 "dma_device_type": 1 00:12:38.395 }, 00:12:38.395 { 00:12:38.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:38.395 "dma_device_type": 2 00:12:38.395 } 00:12:38.395 ], 00:12:38.395 "driver_specific": {} 00:12:38.395 } 00:12:38.395 ] 00:12:38.395 20:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.395 20:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:38.395 20:09:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:12:38.395 20:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:38.395 20:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:38.395 20:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:38.395 20:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:38.395 20:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:38.395 20:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.395 20:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.395 20:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.395 20:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.395 20:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.395 20:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:38.395 20:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.395 20:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.395 20:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.395 20:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.395 "name": "Existed_Raid", 00:12:38.395 "uuid": "3d66f752-4f0f-4e2f-8216-61df06c74822", 00:12:38.395 "strip_size_kb": 64, 00:12:38.395 
"state": "online", 00:12:38.395 "raid_level": "raid0", 00:12:38.395 "superblock": true, 00:12:38.395 "num_base_bdevs": 4, 00:12:38.395 "num_base_bdevs_discovered": 4, 00:12:38.395 "num_base_bdevs_operational": 4, 00:12:38.395 "base_bdevs_list": [ 00:12:38.395 { 00:12:38.395 "name": "NewBaseBdev", 00:12:38.395 "uuid": "4a2e217c-2874-46b8-a200-327191ad79e2", 00:12:38.395 "is_configured": true, 00:12:38.395 "data_offset": 2048, 00:12:38.395 "data_size": 63488 00:12:38.395 }, 00:12:38.395 { 00:12:38.395 "name": "BaseBdev2", 00:12:38.395 "uuid": "3657a548-c79f-43de-804b-0dce33f96bf4", 00:12:38.395 "is_configured": true, 00:12:38.395 "data_offset": 2048, 00:12:38.395 "data_size": 63488 00:12:38.395 }, 00:12:38.395 { 00:12:38.395 "name": "BaseBdev3", 00:12:38.395 "uuid": "163c4c95-06c3-4235-8925-c8f7c5b9967a", 00:12:38.395 "is_configured": true, 00:12:38.395 "data_offset": 2048, 00:12:38.395 "data_size": 63488 00:12:38.395 }, 00:12:38.395 { 00:12:38.395 "name": "BaseBdev4", 00:12:38.395 "uuid": "74357438-ee37-44f1-b687-89031d2e82e3", 00:12:38.395 "is_configured": true, 00:12:38.395 "data_offset": 2048, 00:12:38.395 "data_size": 63488 00:12:38.395 } 00:12:38.395 ] 00:12:38.395 }' 00:12:38.395 20:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.395 20:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.963 20:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:38.963 20:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:38.963 20:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:38.963 20:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:38.963 20:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:38.963 
20:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:38.963 20:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:38.963 20:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.963 20:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.963 20:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:38.963 [2024-10-17 20:09:24.518921] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:38.963 20:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.963 20:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:38.963 "name": "Existed_Raid", 00:12:38.963 "aliases": [ 00:12:38.963 "3d66f752-4f0f-4e2f-8216-61df06c74822" 00:12:38.963 ], 00:12:38.963 "product_name": "Raid Volume", 00:12:38.963 "block_size": 512, 00:12:38.963 "num_blocks": 253952, 00:12:38.963 "uuid": "3d66f752-4f0f-4e2f-8216-61df06c74822", 00:12:38.963 "assigned_rate_limits": { 00:12:38.963 "rw_ios_per_sec": 0, 00:12:38.963 "rw_mbytes_per_sec": 0, 00:12:38.963 "r_mbytes_per_sec": 0, 00:12:38.963 "w_mbytes_per_sec": 0 00:12:38.963 }, 00:12:38.963 "claimed": false, 00:12:38.963 "zoned": false, 00:12:38.963 "supported_io_types": { 00:12:38.963 "read": true, 00:12:38.963 "write": true, 00:12:38.963 "unmap": true, 00:12:38.963 "flush": true, 00:12:38.963 "reset": true, 00:12:38.963 "nvme_admin": false, 00:12:38.963 "nvme_io": false, 00:12:38.963 "nvme_io_md": false, 00:12:38.963 "write_zeroes": true, 00:12:38.963 "zcopy": false, 00:12:38.963 "get_zone_info": false, 00:12:38.963 "zone_management": false, 00:12:38.963 "zone_append": false, 00:12:38.963 "compare": false, 00:12:38.963 "compare_and_write": false, 00:12:38.963 "abort": 
false, 00:12:38.963 "seek_hole": false, 00:12:38.963 "seek_data": false, 00:12:38.963 "copy": false, 00:12:38.963 "nvme_iov_md": false 00:12:38.963 }, 00:12:38.963 "memory_domains": [ 00:12:38.963 { 00:12:38.963 "dma_device_id": "system", 00:12:38.963 "dma_device_type": 1 00:12:38.963 }, 00:12:38.963 { 00:12:38.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:38.963 "dma_device_type": 2 00:12:38.963 }, 00:12:38.963 { 00:12:38.963 "dma_device_id": "system", 00:12:38.963 "dma_device_type": 1 00:12:38.963 }, 00:12:38.963 { 00:12:38.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:38.963 "dma_device_type": 2 00:12:38.963 }, 00:12:38.963 { 00:12:38.963 "dma_device_id": "system", 00:12:38.963 "dma_device_type": 1 00:12:38.963 }, 00:12:38.963 { 00:12:38.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:38.963 "dma_device_type": 2 00:12:38.963 }, 00:12:38.963 { 00:12:38.963 "dma_device_id": "system", 00:12:38.963 "dma_device_type": 1 00:12:38.963 }, 00:12:38.963 { 00:12:38.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:38.963 "dma_device_type": 2 00:12:38.963 } 00:12:38.963 ], 00:12:38.963 "driver_specific": { 00:12:38.963 "raid": { 00:12:38.963 "uuid": "3d66f752-4f0f-4e2f-8216-61df06c74822", 00:12:38.963 "strip_size_kb": 64, 00:12:38.963 "state": "online", 00:12:38.963 "raid_level": "raid0", 00:12:38.963 "superblock": true, 00:12:38.963 "num_base_bdevs": 4, 00:12:38.963 "num_base_bdevs_discovered": 4, 00:12:38.963 "num_base_bdevs_operational": 4, 00:12:38.963 "base_bdevs_list": [ 00:12:38.963 { 00:12:38.963 "name": "NewBaseBdev", 00:12:38.963 "uuid": "4a2e217c-2874-46b8-a200-327191ad79e2", 00:12:38.963 "is_configured": true, 00:12:38.963 "data_offset": 2048, 00:12:38.963 "data_size": 63488 00:12:38.963 }, 00:12:38.963 { 00:12:38.963 "name": "BaseBdev2", 00:12:38.963 "uuid": "3657a548-c79f-43de-804b-0dce33f96bf4", 00:12:38.963 "is_configured": true, 00:12:38.963 "data_offset": 2048, 00:12:38.963 "data_size": 63488 00:12:38.963 }, 00:12:38.963 { 00:12:38.963 
"name": "BaseBdev3", 00:12:38.963 "uuid": "163c4c95-06c3-4235-8925-c8f7c5b9967a", 00:12:38.963 "is_configured": true, 00:12:38.963 "data_offset": 2048, 00:12:38.963 "data_size": 63488 00:12:38.963 }, 00:12:38.963 { 00:12:38.963 "name": "BaseBdev4", 00:12:38.963 "uuid": "74357438-ee37-44f1-b687-89031d2e82e3", 00:12:38.963 "is_configured": true, 00:12:38.963 "data_offset": 2048, 00:12:38.963 "data_size": 63488 00:12:38.963 } 00:12:38.963 ] 00:12:38.963 } 00:12:38.963 } 00:12:38.963 }' 00:12:38.963 20:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:39.222 20:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:39.222 BaseBdev2 00:12:39.222 BaseBdev3 00:12:39.222 BaseBdev4' 00:12:39.222 20:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:39.222 20:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:39.222 20:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:39.222 20:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:39.222 20:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:39.222 20:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.222 20:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.222 20:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.222 20:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:39.222 20:09:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:39.222 20:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:39.222 20:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:39.222 20:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.222 20:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.222 20:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:39.222 20:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.222 20:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:39.222 20:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:39.222 20:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:39.222 20:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:39.222 20:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:39.222 20:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.222 20:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.222 20:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.222 20:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:39.222 20:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:12:39.222 20:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:39.222 20:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:39.222 20:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:39.222 20:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.222 20:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.481 20:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.481 20:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:39.481 20:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:39.481 20:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:39.481 20:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.481 20:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.481 [2024-10-17 20:09:24.918601] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:39.481 [2024-10-17 20:09:24.918805] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:39.481 [2024-10-17 20:09:24.919060] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:39.481 [2024-10-17 20:09:24.919263] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:39.481 [2024-10-17 20:09:24.919383] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:12:39.481 20:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.481 20:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70049 00:12:39.481 20:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 70049 ']' 00:12:39.481 20:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 70049 00:12:39.481 20:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:12:39.481 20:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:39.481 20:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70049 00:12:39.481 killing process with pid 70049 00:12:39.481 20:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:39.481 20:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:39.481 20:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70049' 00:12:39.481 20:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 70049 00:12:39.481 [2024-10-17 20:09:24.956734] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:39.481 20:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 70049 00:12:39.740 [2024-10-17 20:09:25.304511] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:40.673 20:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:40.673 00:12:40.673 real 0m13.146s 00:12:40.673 user 0m21.874s 00:12:40.673 sys 0m1.888s 00:12:40.673 20:09:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:40.932 
************************************ 00:12:40.932 END TEST raid_state_function_test_sb 00:12:40.932 ************************************ 00:12:40.932 20:09:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.932 20:09:26 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:12:40.932 20:09:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:40.932 20:09:26 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:40.932 20:09:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:40.932 ************************************ 00:12:40.932 START TEST raid_superblock_test 00:12:40.932 ************************************ 00:12:40.932 20:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 4 00:12:40.932 20:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:12:40.932 20:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:12:40.932 20:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:40.932 20:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:40.932 20:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:40.932 20:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:40.932 20:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:40.932 20:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:40.932 20:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:40.932 20:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:40.932 20:09:26 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:40.932 20:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:40.932 20:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:40.932 20:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:12:40.932 20:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:12:40.932 20:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:12:40.932 20:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70736 00:12:40.932 20:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:40.932 20:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70736 00:12:40.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:40.932 20:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 70736 ']' 00:12:40.932 20:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:40.932 20:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:40.932 20:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:40.932 20:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:40.932 20:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.932 [2024-10-17 20:09:26.496349] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 
00:12:40.932 [2024-10-17 20:09:26.496560] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70736 ] 00:12:41.191 [2024-10-17 20:09:26.673154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:41.191 [2024-10-17 20:09:26.807364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.449 [2024-10-17 20:09:27.008886] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:41.449 [2024-10-17 20:09:27.008969] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:42.015 20:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:42.015 20:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:12:42.015 20:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:42.015 20:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:42.015 20:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:42.015 20:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:42.015 20:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:42.015 20:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:42.015 20:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:42.015 20:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:42.015 20:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:42.015 
20:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.015 20:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.015 malloc1 00:12:42.015 20:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.015 20:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:42.015 20:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.015 20:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.015 [2024-10-17 20:09:27.538443] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:42.015 [2024-10-17 20:09:27.538726] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:42.015 [2024-10-17 20:09:27.538809] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:42.015 [2024-10-17 20:09:27.539033] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:42.015 [2024-10-17 20:09:27.542062] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:42.015 pt1 00:12:42.015 [2024-10-17 20:09:27.542271] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:42.015 20:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.015 20:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:42.015 20:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:42.015 20:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:42.015 20:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:42.015 20:09:27 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:42.015 20:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:42.015 20:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:42.015 20:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:42.015 20:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:42.015 20:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.015 20:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.015 malloc2 00:12:42.015 20:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.015 20:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:42.015 20:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.015 20:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.015 [2024-10-17 20:09:27.589805] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:42.016 [2024-10-17 20:09:27.590077] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:42.016 [2024-10-17 20:09:27.590159] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:42.016 [2024-10-17 20:09:27.590183] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:42.016 [2024-10-17 20:09:27.593050] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:42.016 [2024-10-17 20:09:27.593266] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:42.016 
pt2 00:12:42.016 20:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.016 20:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:42.016 20:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:42.016 20:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:42.016 20:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:42.016 20:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:42.016 20:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:42.016 20:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:42.016 20:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:42.016 20:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:42.016 20:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.016 20:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.016 malloc3 00:12:42.016 20:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.016 20:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:42.016 20:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.016 20:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.016 [2024-10-17 20:09:27.653983] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:42.016 [2024-10-17 20:09:27.654222] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:42.016 [2024-10-17 20:09:27.654305] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:42.016 [2024-10-17 20:09:27.654444] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:42.016 [2024-10-17 20:09:27.657322] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:42.016 [2024-10-17 20:09:27.657509] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:42.016 pt3 00:12:42.016 20:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.016 20:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:42.016 20:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:42.016 20:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:12:42.016 20:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:12:42.016 20:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:12:42.016 20:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:42.016 20:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:42.016 20:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:42.016 20:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:12:42.016 20:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.016 20:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.274 malloc4 00:12:42.274 20:09:27 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.274 20:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:42.274 20:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.274 20:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.274 [2024-10-17 20:09:27.711382] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:42.274 [2024-10-17 20:09:27.711610] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:42.274 [2024-10-17 20:09:27.711686] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:42.274 [2024-10-17 20:09:27.711797] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:42.274 [2024-10-17 20:09:27.714634] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:42.274 [2024-10-17 20:09:27.714804] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:42.274 pt4 00:12:42.274 20:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.274 20:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:42.274 20:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:42.274 20:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:12:42.274 20:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.274 20:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.274 [2024-10-17 20:09:27.723612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:42.274 [2024-10-17 
20:09:27.726158] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:42.274 [2024-10-17 20:09:27.726377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:42.274 [2024-10-17 20:09:27.726546] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:42.274 [2024-10-17 20:09:27.726785] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:42.274 [2024-10-17 20:09:27.726804] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:42.274 [2024-10-17 20:09:27.727150] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:42.274 [2024-10-17 20:09:27.727366] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:42.274 [2024-10-17 20:09:27.727417] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:42.274 [2024-10-17 20:09:27.727635] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:42.274 20:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.274 20:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:42.274 20:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:42.274 20:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:42.274 20:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:42.274 20:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:42.274 20:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:42.274 20:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:12:42.274 20:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:42.274 20:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:42.274 20:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:42.274 20:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.274 20:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.274 20:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.274 20:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.274 20:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.274 20:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:42.274 "name": "raid_bdev1", 00:12:42.274 "uuid": "e4f04440-975a-43b4-84f9-fb31d5f686cf", 00:12:42.275 "strip_size_kb": 64, 00:12:42.275 "state": "online", 00:12:42.275 "raid_level": "raid0", 00:12:42.275 "superblock": true, 00:12:42.275 "num_base_bdevs": 4, 00:12:42.275 "num_base_bdevs_discovered": 4, 00:12:42.275 "num_base_bdevs_operational": 4, 00:12:42.275 "base_bdevs_list": [ 00:12:42.275 { 00:12:42.275 "name": "pt1", 00:12:42.275 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:42.275 "is_configured": true, 00:12:42.275 "data_offset": 2048, 00:12:42.275 "data_size": 63488 00:12:42.275 }, 00:12:42.275 { 00:12:42.275 "name": "pt2", 00:12:42.275 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:42.275 "is_configured": true, 00:12:42.275 "data_offset": 2048, 00:12:42.275 "data_size": 63488 00:12:42.275 }, 00:12:42.275 { 00:12:42.275 "name": "pt3", 00:12:42.275 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:42.275 "is_configured": true, 00:12:42.275 "data_offset": 2048, 00:12:42.275 
"data_size": 63488 00:12:42.275 }, 00:12:42.275 { 00:12:42.275 "name": "pt4", 00:12:42.275 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:42.275 "is_configured": true, 00:12:42.275 "data_offset": 2048, 00:12:42.275 "data_size": 63488 00:12:42.275 } 00:12:42.275 ] 00:12:42.275 }' 00:12:42.275 20:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:42.275 20:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.901 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:42.901 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:42.901 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:42.901 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:42.901 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:42.901 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:42.901 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:42.901 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:42.901 20:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.901 20:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.901 [2024-10-17 20:09:28.252283] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:42.901 20:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.901 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:42.901 "name": "raid_bdev1", 00:12:42.901 "aliases": [ 00:12:42.901 "e4f04440-975a-43b4-84f9-fb31d5f686cf" 
00:12:42.901 ], 00:12:42.901 "product_name": "Raid Volume", 00:12:42.901 "block_size": 512, 00:12:42.901 "num_blocks": 253952, 00:12:42.901 "uuid": "e4f04440-975a-43b4-84f9-fb31d5f686cf", 00:12:42.901 "assigned_rate_limits": { 00:12:42.901 "rw_ios_per_sec": 0, 00:12:42.901 "rw_mbytes_per_sec": 0, 00:12:42.901 "r_mbytes_per_sec": 0, 00:12:42.901 "w_mbytes_per_sec": 0 00:12:42.901 }, 00:12:42.901 "claimed": false, 00:12:42.901 "zoned": false, 00:12:42.901 "supported_io_types": { 00:12:42.901 "read": true, 00:12:42.901 "write": true, 00:12:42.901 "unmap": true, 00:12:42.901 "flush": true, 00:12:42.901 "reset": true, 00:12:42.901 "nvme_admin": false, 00:12:42.901 "nvme_io": false, 00:12:42.901 "nvme_io_md": false, 00:12:42.901 "write_zeroes": true, 00:12:42.901 "zcopy": false, 00:12:42.901 "get_zone_info": false, 00:12:42.901 "zone_management": false, 00:12:42.901 "zone_append": false, 00:12:42.901 "compare": false, 00:12:42.901 "compare_and_write": false, 00:12:42.901 "abort": false, 00:12:42.901 "seek_hole": false, 00:12:42.901 "seek_data": false, 00:12:42.901 "copy": false, 00:12:42.901 "nvme_iov_md": false 00:12:42.901 }, 00:12:42.901 "memory_domains": [ 00:12:42.901 { 00:12:42.901 "dma_device_id": "system", 00:12:42.901 "dma_device_type": 1 00:12:42.901 }, 00:12:42.901 { 00:12:42.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.901 "dma_device_type": 2 00:12:42.901 }, 00:12:42.901 { 00:12:42.901 "dma_device_id": "system", 00:12:42.901 "dma_device_type": 1 00:12:42.901 }, 00:12:42.901 { 00:12:42.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.901 "dma_device_type": 2 00:12:42.901 }, 00:12:42.901 { 00:12:42.901 "dma_device_id": "system", 00:12:42.901 "dma_device_type": 1 00:12:42.901 }, 00:12:42.901 { 00:12:42.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.901 "dma_device_type": 2 00:12:42.901 }, 00:12:42.901 { 00:12:42.901 "dma_device_id": "system", 00:12:42.901 "dma_device_type": 1 00:12:42.901 }, 00:12:42.901 { 00:12:42.901 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:42.901 "dma_device_type": 2 00:12:42.901 } 00:12:42.901 ], 00:12:42.901 "driver_specific": { 00:12:42.901 "raid": { 00:12:42.901 "uuid": "e4f04440-975a-43b4-84f9-fb31d5f686cf", 00:12:42.901 "strip_size_kb": 64, 00:12:42.901 "state": "online", 00:12:42.901 "raid_level": "raid0", 00:12:42.901 "superblock": true, 00:12:42.901 "num_base_bdevs": 4, 00:12:42.901 "num_base_bdevs_discovered": 4, 00:12:42.901 "num_base_bdevs_operational": 4, 00:12:42.901 "base_bdevs_list": [ 00:12:42.901 { 00:12:42.901 "name": "pt1", 00:12:42.901 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:42.901 "is_configured": true, 00:12:42.901 "data_offset": 2048, 00:12:42.901 "data_size": 63488 00:12:42.901 }, 00:12:42.901 { 00:12:42.901 "name": "pt2", 00:12:42.901 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:42.901 "is_configured": true, 00:12:42.901 "data_offset": 2048, 00:12:42.901 "data_size": 63488 00:12:42.901 }, 00:12:42.901 { 00:12:42.901 "name": "pt3", 00:12:42.901 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:42.901 "is_configured": true, 00:12:42.901 "data_offset": 2048, 00:12:42.901 "data_size": 63488 00:12:42.901 }, 00:12:42.901 { 00:12:42.901 "name": "pt4", 00:12:42.901 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:42.901 "is_configured": true, 00:12:42.901 "data_offset": 2048, 00:12:42.901 "data_size": 63488 00:12:42.901 } 00:12:42.901 ] 00:12:42.901 } 00:12:42.901 } 00:12:42.901 }' 00:12:42.901 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:42.901 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:42.901 pt2 00:12:42.901 pt3 00:12:42.901 pt4' 00:12:42.901 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:42.901 20:09:28 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:42.901 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:42.901 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:42.901 20:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.901 20:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.901 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:42.901 20:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.901 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:42.901 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:42.901 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:42.901 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:42.901 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:42.901 20:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.901 20:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.901 20:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.901 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:42.901 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:42.901 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:42.901 20:09:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:42.901 20:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.901 20:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.901 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:42.901 20:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.160 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:43.161 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:43.161 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:43.161 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:43.161 20:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.161 20:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.161 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:43.161 20:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.161 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:43.161 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:43.161 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:43.161 20:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.161 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | 
.uuid' 00:12:43.161 20:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.161 [2024-10-17 20:09:28.628288] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:43.161 20:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.161 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e4f04440-975a-43b4-84f9-fb31d5f686cf 00:12:43.161 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z e4f04440-975a-43b4-84f9-fb31d5f686cf ']' 00:12:43.161 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:43.161 20:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.161 20:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.161 [2024-10-17 20:09:28.675835] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:43.161 [2024-10-17 20:09:28.676053] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:43.161 [2024-10-17 20:09:28.676317] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:43.161 [2024-10-17 20:09:28.676558] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:43.161 [2024-10-17 20:09:28.676595] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:43.161 20:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.161 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.161 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:43.161 20:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:43.161 20:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.161 20:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.161 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:43.161 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:43.161 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:43.161 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:43.161 20:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.161 20:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.161 20:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.161 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:43.161 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:43.161 20:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.161 20:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.161 20:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.161 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:43.161 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:43.161 20:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.161 20:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.161 20:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:12:43.161 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:43.161 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:12:43.161 20:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.161 20:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.161 20:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.161 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:43.161 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:43.161 20:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.161 20:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.161 20:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.420 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:43.420 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:43.420 20:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:12:43.420 20:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:43.420 20:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:43.420 20:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:43.420 20:09:28 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:43.420 20:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:43.421 20:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:43.421 20:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.421 20:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.421 [2024-10-17 20:09:28.835935] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:43.421 [2024-10-17 20:09:28.838724] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:43.421 [2024-10-17 20:09:28.838789] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:43.421 [2024-10-17 20:09:28.838848] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:12:43.421 [2024-10-17 20:09:28.838921] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:43.421 [2024-10-17 20:09:28.839052] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:43.421 [2024-10-17 20:09:28.839092] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:43.421 [2024-10-17 20:09:28.839138] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:12:43.421 [2024-10-17 20:09:28.839170] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:43.421 [2024-10-17 20:09:28.839186] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring
00:12:43.421 request:
00:12:43.421 {
00:12:43.421 "name": "raid_bdev1",
00:12:43.421 "raid_level": "raid0",
00:12:43.421 "base_bdevs": [
00:12:43.421 "malloc1",
00:12:43.421 "malloc2",
00:12:43.421 "malloc3",
00:12:43.421 "malloc4"
00:12:43.421 ],
00:12:43.421 "strip_size_kb": 64,
00:12:43.421 "superblock": false,
00:12:43.421 "method": "bdev_raid_create",
00:12:43.421 "req_id": 1
00:12:43.421 }
00:12:43.421 Got JSON-RPC error response
00:12:43.421 response:
00:12:43.421 {
00:12:43.421 "code": -17,
00:12:43.421 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:12:43.421 }
00:12:43.421 20:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:12:43.421 20:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1
00:12:43.421 20:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:12:43.421 20:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:12:43.421 20:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:12:43.421 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:43.421 20:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:43.421 20:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:43.421 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:12:43.421 20:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:43.421 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:12:43.421 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:12:43.421 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u
00000000-0000-0000-0000-000000000001 00:12:43.421 20:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.421 20:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.421 [2024-10-17 20:09:28.904043] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:43.421 [2024-10-17 20:09:28.904290] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:43.421 [2024-10-17 20:09:28.904363] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:43.421 [2024-10-17 20:09:28.904569] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:43.421 [2024-10-17 20:09:28.907609] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:43.421 [2024-10-17 20:09:28.907673] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:43.421 [2024-10-17 20:09:28.907779] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:43.421 [2024-10-17 20:09:28.907858] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:43.421 pt1 00:12:43.421 20:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.421 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:12:43.421 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:43.421 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:43.421 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:43.421 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:43.421 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:12:43.421 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.421 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.421 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.421 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.421 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.421 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.421 20:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.421 20:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.421 20:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.421 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.421 "name": "raid_bdev1", 00:12:43.421 "uuid": "e4f04440-975a-43b4-84f9-fb31d5f686cf", 00:12:43.421 "strip_size_kb": 64, 00:12:43.421 "state": "configuring", 00:12:43.421 "raid_level": "raid0", 00:12:43.421 "superblock": true, 00:12:43.421 "num_base_bdevs": 4, 00:12:43.421 "num_base_bdevs_discovered": 1, 00:12:43.421 "num_base_bdevs_operational": 4, 00:12:43.421 "base_bdevs_list": [ 00:12:43.421 { 00:12:43.421 "name": "pt1", 00:12:43.421 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:43.421 "is_configured": true, 00:12:43.421 "data_offset": 2048, 00:12:43.421 "data_size": 63488 00:12:43.421 }, 00:12:43.421 { 00:12:43.421 "name": null, 00:12:43.421 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:43.421 "is_configured": false, 00:12:43.421 "data_offset": 2048, 00:12:43.421 "data_size": 63488 00:12:43.421 }, 00:12:43.421 { 00:12:43.421 "name": null, 00:12:43.421 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:12:43.421 "is_configured": false, 00:12:43.421 "data_offset": 2048, 00:12:43.421 "data_size": 63488 00:12:43.421 }, 00:12:43.421 { 00:12:43.421 "name": null, 00:12:43.421 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:43.421 "is_configured": false, 00:12:43.421 "data_offset": 2048, 00:12:43.421 "data_size": 63488 00:12:43.421 } 00:12:43.421 ] 00:12:43.421 }' 00:12:43.421 20:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.421 20:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.990 20:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:12:43.990 20:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:43.990 20:09:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.990 20:09:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.990 [2024-10-17 20:09:29.460323] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:43.990 [2024-10-17 20:09:29.460638] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:43.990 [2024-10-17 20:09:29.460680] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:43.990 [2024-10-17 20:09:29.460700] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:43.990 [2024-10-17 20:09:29.461330] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:43.990 [2024-10-17 20:09:29.461368] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:43.990 [2024-10-17 20:09:29.461486] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:43.990 [2024-10-17 20:09:29.461524] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:43.990 pt2 00:12:43.990 20:09:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.990 20:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:43.990 20:09:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.990 20:09:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.990 [2024-10-17 20:09:29.468354] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:43.990 20:09:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.990 20:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:12:43.990 20:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:43.990 20:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:43.990 20:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:43.990 20:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:43.990 20:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:43.990 20:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.990 20:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.990 20:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.990 20:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.990 20:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.990 20:09:29 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.990 20:09:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.990 20:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.990 20:09:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.990 20:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.990 "name": "raid_bdev1", 00:12:43.990 "uuid": "e4f04440-975a-43b4-84f9-fb31d5f686cf", 00:12:43.990 "strip_size_kb": 64, 00:12:43.990 "state": "configuring", 00:12:43.990 "raid_level": "raid0", 00:12:43.990 "superblock": true, 00:12:43.990 "num_base_bdevs": 4, 00:12:43.990 "num_base_bdevs_discovered": 1, 00:12:43.990 "num_base_bdevs_operational": 4, 00:12:43.990 "base_bdevs_list": [ 00:12:43.990 { 00:12:43.990 "name": "pt1", 00:12:43.990 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:43.990 "is_configured": true, 00:12:43.990 "data_offset": 2048, 00:12:43.990 "data_size": 63488 00:12:43.990 }, 00:12:43.990 { 00:12:43.990 "name": null, 00:12:43.990 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:43.990 "is_configured": false, 00:12:43.990 "data_offset": 0, 00:12:43.990 "data_size": 63488 00:12:43.990 }, 00:12:43.990 { 00:12:43.990 "name": null, 00:12:43.990 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:43.990 "is_configured": false, 00:12:43.990 "data_offset": 2048, 00:12:43.990 "data_size": 63488 00:12:43.990 }, 00:12:43.990 { 00:12:43.990 "name": null, 00:12:43.990 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:43.990 "is_configured": false, 00:12:43.990 "data_offset": 2048, 00:12:43.990 "data_size": 63488 00:12:43.990 } 00:12:43.990 ] 00:12:43.990 }' 00:12:43.990 20:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.990 20:09:29 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:44.557 20:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:12:44.557 20:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:44.557 20:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:44.557 20:09:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.557 20:09:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.557 [2024-10-17 20:09:29.976528] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:44.557 [2024-10-17 20:09:29.976764] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:44.557 [2024-10-17 20:09:29.976843] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:44.557 [2024-10-17 20:09:29.976865] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:44.557 [2024-10-17 20:09:29.977491] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:44.557 [2024-10-17 20:09:29.977517] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:44.557 [2024-10-17 20:09:29.977641] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:44.557 [2024-10-17 20:09:29.977675] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:44.557 pt2 00:12:44.557 20:09:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.557 20:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:44.557 20:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:44.557 20:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:44.557 20:09:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.557 20:09:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.557 [2024-10-17 20:09:29.988451] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:44.557 [2024-10-17 20:09:29.988657] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:44.557 [2024-10-17 20:09:29.988735] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:44.557 [2024-10-17 20:09:29.988924] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:44.557 [2024-10-17 20:09:29.989437] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:44.557 [2024-10-17 20:09:29.989472] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:44.557 [2024-10-17 20:09:29.989554] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:44.557 [2024-10-17 20:09:29.989582] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:44.557 pt3 00:12:44.557 20:09:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.557 20:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:44.557 20:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:44.557 20:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:44.557 20:09:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.557 20:09:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.557 [2024-10-17 20:09:29.996441] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:44.557 [2024-10-17 20:09:29.996666] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:44.557 [2024-10-17 20:09:29.996736] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:44.557 [2024-10-17 20:09:29.996907] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:44.557 [2024-10-17 20:09:29.997527] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:44.557 [2024-10-17 20:09:29.997569] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:44.557 [2024-10-17 20:09:29.997653] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:44.557 [2024-10-17 20:09:29.997681] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:44.557 [2024-10-17 20:09:29.997846] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:44.557 [2024-10-17 20:09:29.997861] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:44.557 [2024-10-17 20:09:29.998172] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:44.557 [2024-10-17 20:09:29.998353] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:44.558 [2024-10-17 20:09:29.998382] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:44.558 [2024-10-17 20:09:29.998552] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:44.558 pt4 00:12:44.558 20:09:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.558 20:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:44.558 20:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:12:44.558 20:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:44.558 20:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:44.558 20:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:44.558 20:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:44.558 20:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:44.558 20:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:44.558 20:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.558 20:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.558 20:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.558 20:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.558 20:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.558 20:09:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.558 20:09:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.558 20:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:44.558 20:09:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.558 20:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.558 "name": "raid_bdev1", 00:12:44.558 "uuid": "e4f04440-975a-43b4-84f9-fb31d5f686cf", 00:12:44.558 "strip_size_kb": 64, 00:12:44.558 "state": "online", 00:12:44.558 "raid_level": "raid0", 00:12:44.558 
"superblock": true, 00:12:44.558 "num_base_bdevs": 4, 00:12:44.558 "num_base_bdevs_discovered": 4, 00:12:44.558 "num_base_bdevs_operational": 4, 00:12:44.558 "base_bdevs_list": [ 00:12:44.558 { 00:12:44.558 "name": "pt1", 00:12:44.558 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:44.558 "is_configured": true, 00:12:44.558 "data_offset": 2048, 00:12:44.558 "data_size": 63488 00:12:44.558 }, 00:12:44.558 { 00:12:44.558 "name": "pt2", 00:12:44.558 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:44.558 "is_configured": true, 00:12:44.558 "data_offset": 2048, 00:12:44.558 "data_size": 63488 00:12:44.558 }, 00:12:44.558 { 00:12:44.558 "name": "pt3", 00:12:44.558 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:44.558 "is_configured": true, 00:12:44.558 "data_offset": 2048, 00:12:44.558 "data_size": 63488 00:12:44.558 }, 00:12:44.558 { 00:12:44.558 "name": "pt4", 00:12:44.558 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:44.558 "is_configured": true, 00:12:44.558 "data_offset": 2048, 00:12:44.558 "data_size": 63488 00:12:44.558 } 00:12:44.558 ] 00:12:44.558 }' 00:12:44.558 20:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.558 20:09:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.123 20:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:45.123 20:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:45.123 20:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:45.123 20:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:45.123 20:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:45.123 20:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:45.123 20:09:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:45.123 20:09:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.123 20:09:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.123 20:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:45.123 [2024-10-17 20:09:30.541107] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:45.123 20:09:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.123 20:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:45.123 "name": "raid_bdev1", 00:12:45.123 "aliases": [ 00:12:45.123 "e4f04440-975a-43b4-84f9-fb31d5f686cf" 00:12:45.123 ], 00:12:45.123 "product_name": "Raid Volume", 00:12:45.123 "block_size": 512, 00:12:45.123 "num_blocks": 253952, 00:12:45.123 "uuid": "e4f04440-975a-43b4-84f9-fb31d5f686cf", 00:12:45.123 "assigned_rate_limits": { 00:12:45.123 "rw_ios_per_sec": 0, 00:12:45.123 "rw_mbytes_per_sec": 0, 00:12:45.123 "r_mbytes_per_sec": 0, 00:12:45.123 "w_mbytes_per_sec": 0 00:12:45.123 }, 00:12:45.123 "claimed": false, 00:12:45.123 "zoned": false, 00:12:45.123 "supported_io_types": { 00:12:45.123 "read": true, 00:12:45.123 "write": true, 00:12:45.123 "unmap": true, 00:12:45.123 "flush": true, 00:12:45.123 "reset": true, 00:12:45.123 "nvme_admin": false, 00:12:45.123 "nvme_io": false, 00:12:45.123 "nvme_io_md": false, 00:12:45.123 "write_zeroes": true, 00:12:45.123 "zcopy": false, 00:12:45.123 "get_zone_info": false, 00:12:45.123 "zone_management": false, 00:12:45.123 "zone_append": false, 00:12:45.123 "compare": false, 00:12:45.123 "compare_and_write": false, 00:12:45.123 "abort": false, 00:12:45.123 "seek_hole": false, 00:12:45.123 "seek_data": false, 00:12:45.123 "copy": false, 00:12:45.123 "nvme_iov_md": false 00:12:45.123 }, 00:12:45.123 
"memory_domains": [ 00:12:45.123 { 00:12:45.123 "dma_device_id": "system", 00:12:45.123 "dma_device_type": 1 00:12:45.123 }, 00:12:45.123 { 00:12:45.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:45.123 "dma_device_type": 2 00:12:45.123 }, 00:12:45.123 { 00:12:45.123 "dma_device_id": "system", 00:12:45.123 "dma_device_type": 1 00:12:45.123 }, 00:12:45.123 { 00:12:45.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:45.123 "dma_device_type": 2 00:12:45.123 }, 00:12:45.123 { 00:12:45.123 "dma_device_id": "system", 00:12:45.123 "dma_device_type": 1 00:12:45.123 }, 00:12:45.123 { 00:12:45.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:45.123 "dma_device_type": 2 00:12:45.123 }, 00:12:45.123 { 00:12:45.123 "dma_device_id": "system", 00:12:45.123 "dma_device_type": 1 00:12:45.123 }, 00:12:45.123 { 00:12:45.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:45.123 "dma_device_type": 2 00:12:45.123 } 00:12:45.123 ], 00:12:45.123 "driver_specific": { 00:12:45.123 "raid": { 00:12:45.123 "uuid": "e4f04440-975a-43b4-84f9-fb31d5f686cf", 00:12:45.123 "strip_size_kb": 64, 00:12:45.123 "state": "online", 00:12:45.123 "raid_level": "raid0", 00:12:45.123 "superblock": true, 00:12:45.123 "num_base_bdevs": 4, 00:12:45.123 "num_base_bdevs_discovered": 4, 00:12:45.123 "num_base_bdevs_operational": 4, 00:12:45.123 "base_bdevs_list": [ 00:12:45.123 { 00:12:45.123 "name": "pt1", 00:12:45.123 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:45.123 "is_configured": true, 00:12:45.123 "data_offset": 2048, 00:12:45.123 "data_size": 63488 00:12:45.123 }, 00:12:45.123 { 00:12:45.123 "name": "pt2", 00:12:45.123 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:45.123 "is_configured": true, 00:12:45.123 "data_offset": 2048, 00:12:45.123 "data_size": 63488 00:12:45.123 }, 00:12:45.123 { 00:12:45.123 "name": "pt3", 00:12:45.123 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:45.123 "is_configured": true, 00:12:45.123 "data_offset": 2048, 00:12:45.123 "data_size": 63488 
00:12:45.123 }, 00:12:45.123 { 00:12:45.123 "name": "pt4", 00:12:45.123 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:45.123 "is_configured": true, 00:12:45.123 "data_offset": 2048, 00:12:45.123 "data_size": 63488 00:12:45.123 } 00:12:45.123 ] 00:12:45.123 } 00:12:45.123 } 00:12:45.123 }' 00:12:45.123 20:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:45.123 20:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:45.123 pt2 00:12:45.123 pt3 00:12:45.123 pt4' 00:12:45.123 20:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:45.123 20:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:45.123 20:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:45.123 20:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:45.123 20:09:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.123 20:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:45.123 20:09:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.123 20:09:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.123 20:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:45.381 20:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:45.381 20:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:45.381 20:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:45.381 20:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:45.381 20:09:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.381 20:09:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.381 20:09:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.381 20:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:45.381 20:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:45.381 20:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:45.381 20:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:45.381 20:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:45.381 20:09:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.381 20:09:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.381 20:09:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.381 20:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:45.381 20:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:45.381 20:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:45.381 20:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:45.381 20:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:45.381 
20:09:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.381 20:09:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.381 20:09:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.381 20:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:45.381 20:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:45.381 20:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:45.381 20:09:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.381 20:09:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.381 20:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:45.381 [2024-10-17 20:09:30.941207] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:45.381 20:09:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.381 20:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e4f04440-975a-43b4-84f9-fb31d5f686cf '!=' e4f04440-975a-43b4-84f9-fb31d5f686cf ']' 00:12:45.381 20:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:12:45.381 20:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:45.381 20:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:45.381 20:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70736 00:12:45.381 20:09:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 70736 ']' 00:12:45.381 20:09:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 70736 00:12:45.381 20:09:30 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:12:45.381 20:09:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:45.381 20:09:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70736 00:12:45.381 killing process with pid 70736 00:12:45.381 20:09:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:45.381 20:09:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:45.381 20:09:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70736' 00:12:45.381 20:09:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 70736 00:12:45.381 [2024-10-17 20:09:31.020437] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:45.382 20:09:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 70736 00:12:45.382 [2024-10-17 20:09:31.020610] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:45.382 [2024-10-17 20:09:31.020739] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:45.382 [2024-10-17 20:09:31.020758] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:45.948 [2024-10-17 20:09:31.376514] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:46.884 20:09:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:46.884 00:12:46.884 real 0m6.015s 00:12:46.884 user 0m9.078s 00:12:46.884 sys 0m0.892s 00:12:46.884 ************************************ 00:12:46.884 END TEST raid_superblock_test 00:12:46.884 ************************************ 00:12:46.884 20:09:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:46.884 20:09:32 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.884 20:09:32 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:12:46.884 20:09:32 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:46.884 20:09:32 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:46.884 20:09:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:46.884 ************************************ 00:12:46.884 START TEST raid_read_error_test 00:12:46.884 ************************************ 00:12:46.884 20:09:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 read 00:12:46.884 20:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:12:46.884 20:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:46.884 20:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:46.884 20:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:46.884 20:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:46.884 20:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:46.884 20:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:46.884 20:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:46.884 20:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:46.884 20:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:46.884 20:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:46.884 20:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:46.884 20:09:32 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:46.884 20:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:46.884 20:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:46.884 20:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:46.884 20:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:46.884 20:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:46.884 20:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:46.884 20:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:46.884 20:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:46.884 20:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:46.884 20:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:46.884 20:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:46.884 20:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:12:46.884 20:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:46.884 20:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:46.884 20:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:46.884 20:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.4y3t2HwgIj 00:12:46.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:46.884 20:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71001 00:12:46.884 20:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71001 00:12:46.884 20:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:46.884 20:09:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 71001 ']' 00:12:46.884 20:09:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:46.884 20:09:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:46.884 20:09:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:46.884 20:09:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:46.884 20:09:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.143 [2024-10-17 20:09:32.564403] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 
00:12:47.143 [2024-10-17 20:09:32.564581] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71001 ] 00:12:47.143 [2024-10-17 20:09:32.726702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:47.401 [2024-10-17 20:09:32.857315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:47.662 [2024-10-17 20:09:33.059843] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:47.662 [2024-10-17 20:09:33.059908] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:47.921 20:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:47.921 20:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:12:47.921 20:09:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:47.921 20:09:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:47.921 20:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.921 20:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.179 BaseBdev1_malloc 00:12:48.179 20:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.179 20:09:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:48.179 20:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.179 20:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.179 true 00:12:48.179 20:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:48.179 20:09:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:48.179 20:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.179 20:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.179 [2024-10-17 20:09:33.625582] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:48.179 [2024-10-17 20:09:33.625671] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:48.179 [2024-10-17 20:09:33.625701] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:48.179 [2024-10-17 20:09:33.625720] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:48.179 [2024-10-17 20:09:33.628524] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:48.179 [2024-10-17 20:09:33.628576] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:48.179 BaseBdev1 00:12:48.179 20:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.179 20:09:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:48.179 20:09:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:48.179 20:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.179 20:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.179 BaseBdev2_malloc 00:12:48.179 20:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.179 20:09:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:48.179 20:09:33 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.179 20:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.179 true 00:12:48.179 20:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.179 20:09:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:48.179 20:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.179 20:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.179 [2024-10-17 20:09:33.685602] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:48.179 [2024-10-17 20:09:33.685675] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:48.179 [2024-10-17 20:09:33.685704] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:48.179 [2024-10-17 20:09:33.685722] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:48.179 [2024-10-17 20:09:33.688522] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:48.179 [2024-10-17 20:09:33.688573] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:48.179 BaseBdev2 00:12:48.179 20:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.179 20:09:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:48.179 20:09:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:48.179 20:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.179 20:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.179 BaseBdev3_malloc 00:12:48.179 20:09:33 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.179 20:09:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:48.179 20:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.179 20:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.179 true 00:12:48.179 20:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.179 20:09:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:48.179 20:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.179 20:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.179 [2024-10-17 20:09:33.763521] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:48.179 [2024-10-17 20:09:33.763608] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:48.179 [2024-10-17 20:09:33.763643] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:48.179 [2024-10-17 20:09:33.763663] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:48.179 [2024-10-17 20:09:33.766686] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:48.179 [2024-10-17 20:09:33.766882] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:48.179 BaseBdev3 00:12:48.179 20:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.179 20:09:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:48.179 20:09:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:12:48.179 20:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.179 20:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.179 BaseBdev4_malloc 00:12:48.179 20:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.179 20:09:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:48.179 20:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.179 20:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.179 true 00:12:48.179 20:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.179 20:09:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:48.179 20:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.179 20:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.179 [2024-10-17 20:09:33.827928] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:48.179 [2024-10-17 20:09:33.828012] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:48.179 [2024-10-17 20:09:33.828043] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:48.179 [2024-10-17 20:09:33.828076] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:48.438 [2024-10-17 20:09:33.830824] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:48.438 [2024-10-17 20:09:33.830879] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:48.438 BaseBdev4 00:12:48.438 20:09:33 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.438 20:09:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:48.438 20:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.438 20:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.438 [2024-10-17 20:09:33.836045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:48.438 [2024-10-17 20:09:33.838683] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:48.438 [2024-10-17 20:09:33.838922] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:48.438 [2024-10-17 20:09:33.839097] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:48.438 [2024-10-17 20:09:33.839442] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:48.438 [2024-10-17 20:09:33.839578] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:48.438 [2024-10-17 20:09:33.839940] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:48.438 [2024-10-17 20:09:33.840319] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:48.438 [2024-10-17 20:09:33.840437] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:48.438 [2024-10-17 20:09:33.840830] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:48.438 20:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.438 20:09:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:48.438 20:09:33 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:48.438 20:09:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:48.438 20:09:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:48.438 20:09:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:48.438 20:09:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:48.438 20:09:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.438 20:09:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.438 20:09:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.438 20:09:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.438 20:09:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.438 20:09:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.438 20:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.438 20:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.438 20:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.438 20:09:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.438 "name": "raid_bdev1", 00:12:48.438 "uuid": "d4a9eaf6-aadb-4c20-8956-ee01820163f7", 00:12:48.438 "strip_size_kb": 64, 00:12:48.438 "state": "online", 00:12:48.438 "raid_level": "raid0", 00:12:48.438 "superblock": true, 00:12:48.438 "num_base_bdevs": 4, 00:12:48.438 "num_base_bdevs_discovered": 4, 00:12:48.438 "num_base_bdevs_operational": 4, 00:12:48.438 "base_bdevs_list": [ 00:12:48.438 
{ 00:12:48.438 "name": "BaseBdev1", 00:12:48.438 "uuid": "ad9139d5-8089-50ad-ae84-f0291fc0ed8a", 00:12:48.438 "is_configured": true, 00:12:48.438 "data_offset": 2048, 00:12:48.438 "data_size": 63488 00:12:48.438 }, 00:12:48.438 { 00:12:48.438 "name": "BaseBdev2", 00:12:48.438 "uuid": "c98e9816-37b1-5675-8123-c12dba91c442", 00:12:48.438 "is_configured": true, 00:12:48.438 "data_offset": 2048, 00:12:48.438 "data_size": 63488 00:12:48.438 }, 00:12:48.438 { 00:12:48.438 "name": "BaseBdev3", 00:12:48.438 "uuid": "8383527a-b60e-58d8-a2e2-2e6ba76d7fc2", 00:12:48.438 "is_configured": true, 00:12:48.438 "data_offset": 2048, 00:12:48.438 "data_size": 63488 00:12:48.438 }, 00:12:48.438 { 00:12:48.438 "name": "BaseBdev4", 00:12:48.438 "uuid": "6088eede-8a49-5da3-99b7-9b67cc64b95a", 00:12:48.438 "is_configured": true, 00:12:48.438 "data_offset": 2048, 00:12:48.438 "data_size": 63488 00:12:48.438 } 00:12:48.438 ] 00:12:48.438 }' 00:12:48.438 20:09:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.438 20:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.005 20:09:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:49.005 20:09:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:49.005 [2024-10-17 20:09:34.502520] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:49.940 20:09:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:49.940 20:09:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.940 20:09:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.940 20:09:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.940 20:09:35 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:49.940 20:09:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:12:49.940 20:09:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:49.940 20:09:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:49.940 20:09:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:49.940 20:09:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:49.940 20:09:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:49.940 20:09:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:49.940 20:09:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:49.940 20:09:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.940 20:09:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.940 20:09:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.940 20:09:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.940 20:09:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.940 20:09:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.940 20:09:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.940 20:09:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.940 20:09:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.940 20:09:35 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.940 "name": "raid_bdev1", 00:12:49.940 "uuid": "d4a9eaf6-aadb-4c20-8956-ee01820163f7", 00:12:49.940 "strip_size_kb": 64, 00:12:49.940 "state": "online", 00:12:49.940 "raid_level": "raid0", 00:12:49.940 "superblock": true, 00:12:49.940 "num_base_bdevs": 4, 00:12:49.940 "num_base_bdevs_discovered": 4, 00:12:49.940 "num_base_bdevs_operational": 4, 00:12:49.940 "base_bdevs_list": [ 00:12:49.940 { 00:12:49.940 "name": "BaseBdev1", 00:12:49.940 "uuid": "ad9139d5-8089-50ad-ae84-f0291fc0ed8a", 00:12:49.940 "is_configured": true, 00:12:49.940 "data_offset": 2048, 00:12:49.940 "data_size": 63488 00:12:49.940 }, 00:12:49.940 { 00:12:49.940 "name": "BaseBdev2", 00:12:49.940 "uuid": "c98e9816-37b1-5675-8123-c12dba91c442", 00:12:49.940 "is_configured": true, 00:12:49.940 "data_offset": 2048, 00:12:49.940 "data_size": 63488 00:12:49.940 }, 00:12:49.940 { 00:12:49.940 "name": "BaseBdev3", 00:12:49.940 "uuid": "8383527a-b60e-58d8-a2e2-2e6ba76d7fc2", 00:12:49.940 "is_configured": true, 00:12:49.940 "data_offset": 2048, 00:12:49.940 "data_size": 63488 00:12:49.940 }, 00:12:49.940 { 00:12:49.940 "name": "BaseBdev4", 00:12:49.940 "uuid": "6088eede-8a49-5da3-99b7-9b67cc64b95a", 00:12:49.940 "is_configured": true, 00:12:49.940 "data_offset": 2048, 00:12:49.940 "data_size": 63488 00:12:49.940 } 00:12:49.940 ] 00:12:49.940 }' 00:12:49.940 20:09:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.940 20:09:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.507 20:09:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:50.507 20:09:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.507 20:09:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.507 [2024-10-17 20:09:35.936858] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:50.507 [2024-10-17 20:09:35.937082] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:50.507 [2024-10-17 20:09:35.940523] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:50.507 [2024-10-17 20:09:35.940768] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:50.507 { 00:12:50.507 "results": [ 00:12:50.507 { 00:12:50.507 "job": "raid_bdev1", 00:12:50.507 "core_mask": "0x1", 00:12:50.507 "workload": "randrw", 00:12:50.507 "percentage": 50, 00:12:50.507 "status": "finished", 00:12:50.507 "queue_depth": 1, 00:12:50.507 "io_size": 131072, 00:12:50.507 "runtime": 1.432051, 00:12:50.507 "iops": 10639.285891354428, 00:12:50.507 "mibps": 1329.9107364193035, 00:12:50.507 "io_failed": 1, 00:12:50.507 "io_timeout": 0, 00:12:50.507 "avg_latency_us": 131.7527711849744, 00:12:50.507 "min_latency_us": 40.72727272727273, 00:12:50.507 "max_latency_us": 1951.1854545454546 00:12:50.507 } 00:12:50.507 ], 00:12:50.507 "core_count": 1 00:12:50.507 } 00:12:50.507 [2024-10-17 20:09:35.941013] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:50.507 [2024-10-17 20:09:35.941047] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:50.507 20:09:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.507 20:09:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71001 00:12:50.507 20:09:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 71001 ']' 00:12:50.507 20:09:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 71001 00:12:50.507 20:09:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:12:50.507 20:09:35 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:50.507 20:09:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71001 00:12:50.507 killing process with pid 71001 00:12:50.507 20:09:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:50.507 20:09:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:50.507 20:09:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71001' 00:12:50.507 20:09:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 71001 00:12:50.507 [2024-10-17 20:09:35.975581] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:50.507 20:09:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 71001 00:12:50.765 [2024-10-17 20:09:36.270764] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:51.700 20:09:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:51.701 20:09:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.4y3t2HwgIj 00:12:51.701 20:09:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:51.701 20:09:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:12:51.701 ************************************ 00:12:51.701 END TEST raid_read_error_test 00:12:51.701 ************************************ 00:12:51.701 20:09:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:12:51.701 20:09:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:51.701 20:09:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:51.701 20:09:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:12:51.701 00:12:51.701 real 0m4.885s 
00:12:51.701 user 0m6.063s 00:12:51.701 sys 0m0.598s 00:12:51.701 20:09:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:51.701 20:09:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.046 20:09:37 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:12:52.046 20:09:37 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:52.046 20:09:37 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:52.046 20:09:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:52.046 ************************************ 00:12:52.046 START TEST raid_write_error_test 00:12:52.046 ************************************ 00:12:52.046 20:09:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 write 00:12:52.046 20:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:12:52.046 20:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:52.046 20:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:52.046 20:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:52.046 20:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:52.046 20:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:52.046 20:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:52.046 20:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:52.046 20:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:52.046 20:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:52.046 20:09:37 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:52.046 20:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:52.046 20:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:52.046 20:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:52.046 20:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:52.046 20:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:52.046 20:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:52.046 20:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:52.047 20:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:52.047 20:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:52.047 20:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:52.047 20:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:52.047 20:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:52.047 20:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:52.047 20:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:12:52.047 20:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:52.047 20:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:52.047 20:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:52.047 20:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.an4zWbDeQX 00:12:52.047 20:09:37 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71155 00:12:52.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:52.047 20:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71155 00:12:52.047 20:09:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 71155 ']' 00:12:52.047 20:09:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:52.047 20:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:52.047 20:09:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:52.047 20:09:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:52.047 20:09:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:52.047 20:09:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.047 [2024-10-17 20:09:37.509080] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 
00:12:52.047 [2024-10-17 20:09:37.509261] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71155 ] 00:12:52.305 [2024-10-17 20:09:37.676784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:52.305 [2024-10-17 20:09:37.810540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.563 [2024-10-17 20:09:38.052084] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:52.563 [2024-10-17 20:09:38.052151] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:53.130 20:09:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:53.130 20:09:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:12:53.130 20:09:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:53.130 20:09:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:53.130 20:09:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.130 20:09:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.130 BaseBdev1_malloc 00:12:53.130 20:09:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.130 20:09:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:53.130 20:09:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.130 20:09:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.130 true 00:12:53.130 20:09:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:12:53.130 20:09:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:53.130 20:09:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.130 20:09:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.130 [2024-10-17 20:09:38.547666] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:53.130 [2024-10-17 20:09:38.547741] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:53.130 [2024-10-17 20:09:38.547771] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:53.130 [2024-10-17 20:09:38.547789] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:53.130 [2024-10-17 20:09:38.550673] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:53.130 [2024-10-17 20:09:38.550720] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:53.130 BaseBdev1 00:12:53.130 20:09:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.130 20:09:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:53.130 20:09:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:53.130 20:09:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.130 20:09:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.130 BaseBdev2_malloc 00:12:53.130 20:09:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.130 20:09:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:53.130 20:09:38 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.130 20:09:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.130 true 00:12:53.130 20:09:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.130 20:09:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:53.130 20:09:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.130 20:09:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.130 [2024-10-17 20:09:38.612704] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:53.130 [2024-10-17 20:09:38.612776] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:53.130 [2024-10-17 20:09:38.612803] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:53.130 [2024-10-17 20:09:38.612820] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:53.130 [2024-10-17 20:09:38.615781] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:53.130 [2024-10-17 20:09:38.615841] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:53.130 BaseBdev2 00:12:53.130 20:09:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.130 20:09:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:53.130 20:09:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:53.130 20:09:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.130 20:09:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:12:53.130 BaseBdev3_malloc 00:12:53.130 20:09:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.130 20:09:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:53.130 20:09:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.130 20:09:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.130 true 00:12:53.130 20:09:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.130 20:09:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:53.130 20:09:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.130 20:09:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.130 [2024-10-17 20:09:38.697919] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:53.130 [2024-10-17 20:09:38.697987] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:53.130 [2024-10-17 20:09:38.698027] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:53.130 [2024-10-17 20:09:38.698046] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:53.130 [2024-10-17 20:09:38.700895] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:53.130 [2024-10-17 20:09:38.700940] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:53.130 BaseBdev3 00:12:53.130 20:09:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.130 20:09:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:53.130 20:09:38 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:53.131 20:09:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.131 20:09:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.131 BaseBdev4_malloc 00:12:53.131 20:09:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.131 20:09:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:53.131 20:09:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.131 20:09:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.131 true 00:12:53.131 20:09:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.131 20:09:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:53.131 20:09:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.131 20:09:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.131 [2024-10-17 20:09:38.756174] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:53.131 [2024-10-17 20:09:38.756238] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:53.131 [2024-10-17 20:09:38.756265] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:53.131 [2024-10-17 20:09:38.756281] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:53.131 [2024-10-17 20:09:38.759129] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:53.131 [2024-10-17 20:09:38.759191] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:53.131 BaseBdev4 
00:12:53.131 20:09:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.131 20:09:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:53.131 20:09:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.131 20:09:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.131 [2024-10-17 20:09:38.764263] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:53.131 [2024-10-17 20:09:38.766722] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:53.131 [2024-10-17 20:09:38.766869] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:53.131 [2024-10-17 20:09:38.766969] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:53.131 [2024-10-17 20:09:38.767281] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:53.131 [2024-10-17 20:09:38.767316] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:53.131 [2024-10-17 20:09:38.767621] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:53.131 [2024-10-17 20:09:38.767847] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:53.131 [2024-10-17 20:09:38.767869] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:53.131 [2024-10-17 20:09:38.768144] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:53.131 20:09:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.131 20:09:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:12:53.131 20:09:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:53.131 20:09:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:53.131 20:09:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:53.131 20:09:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:53.131 20:09:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:53.131 20:09:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:53.131 20:09:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:53.131 20:09:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:53.131 20:09:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:53.131 20:09:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.131 20:09:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.131 20:09:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.131 20:09:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:53.390 20:09:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.390 20:09:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:53.390 "name": "raid_bdev1", 00:12:53.390 "uuid": "6a222230-df68-419b-9b31-aeaa4aaad151", 00:12:53.390 "strip_size_kb": 64, 00:12:53.390 "state": "online", 00:12:53.390 "raid_level": "raid0", 00:12:53.390 "superblock": true, 00:12:53.390 "num_base_bdevs": 4, 00:12:53.390 "num_base_bdevs_discovered": 4, 00:12:53.390 
"num_base_bdevs_operational": 4, 00:12:53.390 "base_bdevs_list": [ 00:12:53.390 { 00:12:53.390 "name": "BaseBdev1", 00:12:53.390 "uuid": "19f6f2af-39c3-57e8-9f9e-fe6029efad0b", 00:12:53.390 "is_configured": true, 00:12:53.390 "data_offset": 2048, 00:12:53.390 "data_size": 63488 00:12:53.390 }, 00:12:53.390 { 00:12:53.390 "name": "BaseBdev2", 00:12:53.390 "uuid": "63ffcb5c-aeba-55c0-aedb-603d8258dc67", 00:12:53.390 "is_configured": true, 00:12:53.390 "data_offset": 2048, 00:12:53.390 "data_size": 63488 00:12:53.390 }, 00:12:53.390 { 00:12:53.390 "name": "BaseBdev3", 00:12:53.390 "uuid": "96e318af-d11c-536c-9256-c7d2b4d22081", 00:12:53.390 "is_configured": true, 00:12:53.390 "data_offset": 2048, 00:12:53.390 "data_size": 63488 00:12:53.390 }, 00:12:53.390 { 00:12:53.390 "name": "BaseBdev4", 00:12:53.390 "uuid": "f8511c46-204a-5467-99bd-a4c83bbf60f6", 00:12:53.390 "is_configured": true, 00:12:53.390 "data_offset": 2048, 00:12:53.390 "data_size": 63488 00:12:53.390 } 00:12:53.390 ] 00:12:53.390 }' 00:12:53.390 20:09:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:53.390 20:09:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.956 20:09:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:53.956 20:09:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:53.956 [2024-10-17 20:09:39.450030] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:54.905 20:09:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:54.905 20:09:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.905 20:09:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.905 20:09:40 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.905 20:09:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:54.905 20:09:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:12:54.905 20:09:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:54.905 20:09:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:54.905 20:09:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:54.905 20:09:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:54.905 20:09:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:54.905 20:09:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:54.905 20:09:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:54.905 20:09:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.905 20:09:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.905 20:09:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.905 20:09:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.905 20:09:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.905 20:09:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.905 20:09:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.905 20:09:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.905 20:09:40 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.905 20:09:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.905 "name": "raid_bdev1", 00:12:54.905 "uuid": "6a222230-df68-419b-9b31-aeaa4aaad151", 00:12:54.905 "strip_size_kb": 64, 00:12:54.905 "state": "online", 00:12:54.905 "raid_level": "raid0", 00:12:54.905 "superblock": true, 00:12:54.905 "num_base_bdevs": 4, 00:12:54.905 "num_base_bdevs_discovered": 4, 00:12:54.905 "num_base_bdevs_operational": 4, 00:12:54.905 "base_bdevs_list": [ 00:12:54.905 { 00:12:54.905 "name": "BaseBdev1", 00:12:54.905 "uuid": "19f6f2af-39c3-57e8-9f9e-fe6029efad0b", 00:12:54.905 "is_configured": true, 00:12:54.905 "data_offset": 2048, 00:12:54.905 "data_size": 63488 00:12:54.905 }, 00:12:54.905 { 00:12:54.905 "name": "BaseBdev2", 00:12:54.905 "uuid": "63ffcb5c-aeba-55c0-aedb-603d8258dc67", 00:12:54.905 "is_configured": true, 00:12:54.905 "data_offset": 2048, 00:12:54.905 "data_size": 63488 00:12:54.905 }, 00:12:54.905 { 00:12:54.905 "name": "BaseBdev3", 00:12:54.905 "uuid": "96e318af-d11c-536c-9256-c7d2b4d22081", 00:12:54.905 "is_configured": true, 00:12:54.905 "data_offset": 2048, 00:12:54.905 "data_size": 63488 00:12:54.905 }, 00:12:54.905 { 00:12:54.905 "name": "BaseBdev4", 00:12:54.905 "uuid": "f8511c46-204a-5467-99bd-a4c83bbf60f6", 00:12:54.905 "is_configured": true, 00:12:54.905 "data_offset": 2048, 00:12:54.905 "data_size": 63488 00:12:54.905 } 00:12:54.905 ] 00:12:54.905 }' 00:12:54.905 20:09:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.905 20:09:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.473 20:09:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:55.473 20:09:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.473 20:09:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:12:55.473 [2024-10-17 20:09:40.909180] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:55.473 [2024-10-17 20:09:40.909223] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:55.473 [2024-10-17 20:09:40.912478] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:55.473 [2024-10-17 20:09:40.912559] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:55.473 [2024-10-17 20:09:40.912622] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:55.473 [2024-10-17 20:09:40.912641] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:55.473 { 00:12:55.473 "results": [ 00:12:55.473 { 00:12:55.473 "job": "raid_bdev1", 00:12:55.473 "core_mask": "0x1", 00:12:55.473 "workload": "randrw", 00:12:55.473 "percentage": 50, 00:12:55.473 "status": "finished", 00:12:55.473 "queue_depth": 1, 00:12:55.473 "io_size": 131072, 00:12:55.473 "runtime": 1.456314, 00:12:55.473 "iops": 10851.368592212943, 00:12:55.473 "mibps": 1356.4210740266178, 00:12:55.473 "io_failed": 1, 00:12:55.473 "io_timeout": 0, 00:12:55.473 "avg_latency_us": 129.43323715515058, 00:12:55.473 "min_latency_us": 38.167272727272724, 00:12:55.473 "max_latency_us": 1899.0545454545454 00:12:55.473 } 00:12:55.473 ], 00:12:55.473 "core_count": 1 00:12:55.473 } 00:12:55.473 20:09:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.473 20:09:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71155 00:12:55.473 20:09:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 71155 ']' 00:12:55.473 20:09:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 71155 00:12:55.473 20:09:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # 
uname 00:12:55.473 20:09:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:55.473 20:09:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71155 00:12:55.473 20:09:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:55.473 killing process with pid 71155 00:12:55.473 20:09:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:55.473 20:09:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71155' 00:12:55.473 20:09:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 71155 00:12:55.473 [2024-10-17 20:09:40.948007] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:55.473 20:09:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 71155 00:12:55.732 [2024-10-17 20:09:41.226375] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:56.668 20:09:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.an4zWbDeQX 00:12:56.668 20:09:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:56.668 20:09:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:56.668 20:09:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.69 00:12:56.668 20:09:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:12:56.668 20:09:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:56.668 20:09:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:56.668 20:09:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.69 != \0\.\0\0 ]] 00:12:56.668 00:12:56.668 real 0m4.879s 00:12:56.668 user 0m6.055s 00:12:56.668 sys 0m0.608s 00:12:56.668 
20:09:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:56.668 ************************************ 00:12:56.668 END TEST raid_write_error_test 00:12:56.668 ************************************ 00:12:56.668 20:09:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.927 20:09:42 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:56.927 20:09:42 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:12:56.927 20:09:42 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:56.927 20:09:42 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:56.927 20:09:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:56.927 ************************************ 00:12:56.927 START TEST raid_state_function_test 00:12:56.927 ************************************ 00:12:56.927 20:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 false 00:12:56.927 20:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:12:56.927 20:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:56.927 20:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:56.927 20:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:56.927 20:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:56.927 20:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:56.927 20:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:56.927 20:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:56.927 20:09:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:56.927 20:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:56.927 20:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:56.927 20:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:56.927 20:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:56.927 20:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:56.927 20:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:56.927 20:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:56.927 20:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:56.927 20:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:56.927 20:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:56.927 20:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:56.927 20:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:56.927 20:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:56.927 20:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:56.927 20:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:56.927 20:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:12:56.927 20:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:56.927 20:09:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:56.927 20:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:56.927 20:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:56.927 20:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71301 00:12:56.927 Process raid pid: 71301 00:12:56.927 20:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71301' 00:12:56.927 20:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71301 00:12:56.927 20:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:56.927 20:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 71301 ']' 00:12:56.927 20:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:56.927 20:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:56.927 20:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:56.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:56.927 20:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:56.927 20:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.927 [2024-10-17 20:09:42.459915] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 
00:12:56.927 [2024-10-17 20:09:42.461101] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:57.185 [2024-10-17 20:09:42.636850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:57.185 [2024-10-17 20:09:42.771212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.443 [2024-10-17 20:09:42.970478] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:57.443 [2024-10-17 20:09:42.970554] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:58.008 20:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:58.008 20:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:12:58.008 20:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:58.008 20:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.008 20:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.008 [2024-10-17 20:09:43.412310] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:58.008 [2024-10-17 20:09:43.412395] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:58.008 [2024-10-17 20:09:43.412411] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:58.008 [2024-10-17 20:09:43.412428] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:58.008 [2024-10-17 20:09:43.412438] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:12:58.008 [2024-10-17 20:09:43.412452] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:58.008 [2024-10-17 20:09:43.412462] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:58.009 [2024-10-17 20:09:43.412476] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:58.009 20:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.009 20:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:58.009 20:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:58.009 20:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:58.009 20:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:58.009 20:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:58.009 20:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:58.009 20:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.009 20:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.009 20:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.009 20:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.009 20:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.009 20:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:58.009 20:09:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.009 20:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.009 20:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.009 20:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.009 "name": "Existed_Raid", 00:12:58.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.009 "strip_size_kb": 64, 00:12:58.009 "state": "configuring", 00:12:58.009 "raid_level": "concat", 00:12:58.009 "superblock": false, 00:12:58.009 "num_base_bdevs": 4, 00:12:58.009 "num_base_bdevs_discovered": 0, 00:12:58.009 "num_base_bdevs_operational": 4, 00:12:58.009 "base_bdevs_list": [ 00:12:58.009 { 00:12:58.009 "name": "BaseBdev1", 00:12:58.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.009 "is_configured": false, 00:12:58.009 "data_offset": 0, 00:12:58.009 "data_size": 0 00:12:58.009 }, 00:12:58.009 { 00:12:58.009 "name": "BaseBdev2", 00:12:58.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.009 "is_configured": false, 00:12:58.009 "data_offset": 0, 00:12:58.009 "data_size": 0 00:12:58.009 }, 00:12:58.009 { 00:12:58.009 "name": "BaseBdev3", 00:12:58.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.009 "is_configured": false, 00:12:58.009 "data_offset": 0, 00:12:58.009 "data_size": 0 00:12:58.009 }, 00:12:58.009 { 00:12:58.009 "name": "BaseBdev4", 00:12:58.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.009 "is_configured": false, 00:12:58.009 "data_offset": 0, 00:12:58.009 "data_size": 0 00:12:58.009 } 00:12:58.009 ] 00:12:58.009 }' 00:12:58.009 20:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.009 20:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.575 20:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:12:58.575 20:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.575 20:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.575 [2024-10-17 20:09:43.972356] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:58.575 [2024-10-17 20:09:43.972467] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:58.575 20:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.575 20:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:58.575 20:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.575 20:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.575 [2024-10-17 20:09:43.980392] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:58.575 [2024-10-17 20:09:43.980505] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:58.575 [2024-10-17 20:09:43.980519] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:58.575 [2024-10-17 20:09:43.980533] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:58.575 [2024-10-17 20:09:43.980542] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:58.575 [2024-10-17 20:09:43.980571] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:58.575 [2024-10-17 20:09:43.980580] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:58.575 [2024-10-17 20:09:43.980594] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:58.575 20:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.575 20:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:58.575 20:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.575 20:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.575 [2024-10-17 20:09:44.024201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:58.575 BaseBdev1 00:12:58.575 20:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.575 20:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:58.575 20:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:58.575 20:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:58.575 20:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:58.575 20:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:58.575 20:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:58.575 20:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:58.575 20:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.575 20:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.575 20:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.575 20:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:58.575 20:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.575 20:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.575 [ 00:12:58.575 { 00:12:58.575 "name": "BaseBdev1", 00:12:58.575 "aliases": [ 00:12:58.575 "26c216fc-7e8d-432b-ada6-391c148d3f1c" 00:12:58.575 ], 00:12:58.575 "product_name": "Malloc disk", 00:12:58.575 "block_size": 512, 00:12:58.575 "num_blocks": 65536, 00:12:58.575 "uuid": "26c216fc-7e8d-432b-ada6-391c148d3f1c", 00:12:58.575 "assigned_rate_limits": { 00:12:58.575 "rw_ios_per_sec": 0, 00:12:58.575 "rw_mbytes_per_sec": 0, 00:12:58.575 "r_mbytes_per_sec": 0, 00:12:58.575 "w_mbytes_per_sec": 0 00:12:58.575 }, 00:12:58.575 "claimed": true, 00:12:58.575 "claim_type": "exclusive_write", 00:12:58.575 "zoned": false, 00:12:58.575 "supported_io_types": { 00:12:58.575 "read": true, 00:12:58.575 "write": true, 00:12:58.575 "unmap": true, 00:12:58.575 "flush": true, 00:12:58.575 "reset": true, 00:12:58.575 "nvme_admin": false, 00:12:58.575 "nvme_io": false, 00:12:58.575 "nvme_io_md": false, 00:12:58.575 "write_zeroes": true, 00:12:58.575 "zcopy": true, 00:12:58.575 "get_zone_info": false, 00:12:58.575 "zone_management": false, 00:12:58.575 "zone_append": false, 00:12:58.575 "compare": false, 00:12:58.575 "compare_and_write": false, 00:12:58.575 "abort": true, 00:12:58.575 "seek_hole": false, 00:12:58.575 "seek_data": false, 00:12:58.575 "copy": true, 00:12:58.575 "nvme_iov_md": false 00:12:58.575 }, 00:12:58.575 "memory_domains": [ 00:12:58.575 { 00:12:58.575 "dma_device_id": "system", 00:12:58.575 "dma_device_type": 1 00:12:58.575 }, 00:12:58.575 { 00:12:58.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.575 "dma_device_type": 2 00:12:58.575 } 00:12:58.575 ], 00:12:58.575 "driver_specific": {} 00:12:58.575 } 00:12:58.575 ] 00:12:58.575 20:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:12:58.575 20:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:58.575 20:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:58.575 20:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:58.575 20:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:58.575 20:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:58.575 20:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:58.575 20:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:58.575 20:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.575 20:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.575 20:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.575 20:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.575 20:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.575 20:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.575 20:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.575 20:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:58.575 20:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.575 20:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.575 "name": "Existed_Raid", 
00:12:58.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.575 "strip_size_kb": 64, 00:12:58.575 "state": "configuring", 00:12:58.575 "raid_level": "concat", 00:12:58.575 "superblock": false, 00:12:58.575 "num_base_bdevs": 4, 00:12:58.575 "num_base_bdevs_discovered": 1, 00:12:58.575 "num_base_bdevs_operational": 4, 00:12:58.575 "base_bdevs_list": [ 00:12:58.575 { 00:12:58.575 "name": "BaseBdev1", 00:12:58.575 "uuid": "26c216fc-7e8d-432b-ada6-391c148d3f1c", 00:12:58.575 "is_configured": true, 00:12:58.575 "data_offset": 0, 00:12:58.575 "data_size": 65536 00:12:58.575 }, 00:12:58.575 { 00:12:58.575 "name": "BaseBdev2", 00:12:58.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.575 "is_configured": false, 00:12:58.575 "data_offset": 0, 00:12:58.575 "data_size": 0 00:12:58.575 }, 00:12:58.575 { 00:12:58.575 "name": "BaseBdev3", 00:12:58.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.575 "is_configured": false, 00:12:58.575 "data_offset": 0, 00:12:58.575 "data_size": 0 00:12:58.575 }, 00:12:58.575 { 00:12:58.575 "name": "BaseBdev4", 00:12:58.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.575 "is_configured": false, 00:12:58.575 "data_offset": 0, 00:12:58.575 "data_size": 0 00:12:58.575 } 00:12:58.575 ] 00:12:58.575 }' 00:12:58.575 20:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.575 20:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.141 20:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:59.141 20:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.141 20:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.141 [2024-10-17 20:09:44.548469] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:59.141 [2024-10-17 20:09:44.548550] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:59.141 20:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.141 20:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:59.141 20:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.141 20:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.141 [2024-10-17 20:09:44.556469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:59.141 [2024-10-17 20:09:44.558973] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:59.141 [2024-10-17 20:09:44.559070] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:59.141 [2024-10-17 20:09:44.559086] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:59.141 [2024-10-17 20:09:44.559102] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:59.141 [2024-10-17 20:09:44.559112] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:59.141 [2024-10-17 20:09:44.559125] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:59.141 20:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.141 20:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:59.141 20:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:59.141 20:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
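The `verify_raid_bdev_state` calls traced above fetch `bdev_raid_get_bdevs all`, select the named array with jq, and assert on its fields. A minimal Python stand-in for that bash+jq helper is sketched below; the field names and values are copied from the JSON dumped earlier in this log, but the helper itself is illustrative, not SPDK code:

```python
import json

# Sample mirroring the `rpc_cmd bdev_raid_get_bdevs all` output logged above,
# before any base bdev exists (all four entries unconfigured).
RPC_OUTPUT = json.dumps([{
    "name": "Existed_Raid",
    "strip_size_kb": 64,
    "state": "configuring",
    "raid_level": "concat",
    "superblock": False,
    "num_base_bdevs": 4,
    "num_base_bdevs_discovered": 0,
    "num_base_bdevs_operational": 4,
    "base_bdevs_list": [
        {"name": f"BaseBdev{i}", "is_configured": False,
         "data_offset": 0, "data_size": 0}
        for i in range(1, 5)
    ],
}])

def verify_raid_bdev_state(rpc_output, name, expected_state,
                           raid_level, strip_size, num_operational):
    """Select the named raid bdev (the jq '.[] | select(.name == ...)' step)
    and check the fields the shell test asserts on."""
    info = next(b for b in json.loads(rpc_output) if b["name"] == name)
    assert info["state"] == expected_state, info["state"]
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == num_operational
    return info

info = verify_raid_bdev_state(RPC_OUTPUT, "Existed_Raid",
                              "configuring", "concat", 64, 4)
print(info["num_base_bdevs_discovered"])  # 0: no base bdev claimed yet
```

The array stays in `configuring` state while `num_base_bdevs_discovered` is below `num_base_bdevs`, which is exactly the progression the repeated checks in this log exercise.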
00:12:59.141 20:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:59.141 20:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:59.141 20:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:59.141 20:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:59.141 20:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:59.141 20:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.141 20:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.141 20:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.141 20:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.141 20:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.141 20:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.141 20:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.141 20:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:59.141 20:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.141 20:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.141 "name": "Existed_Raid", 00:12:59.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.141 "strip_size_kb": 64, 00:12:59.141 "state": "configuring", 00:12:59.141 "raid_level": "concat", 00:12:59.141 "superblock": false, 00:12:59.141 "num_base_bdevs": 4, 00:12:59.141 
"num_base_bdevs_discovered": 1, 00:12:59.141 "num_base_bdevs_operational": 4, 00:12:59.141 "base_bdevs_list": [ 00:12:59.141 { 00:12:59.141 "name": "BaseBdev1", 00:12:59.141 "uuid": "26c216fc-7e8d-432b-ada6-391c148d3f1c", 00:12:59.141 "is_configured": true, 00:12:59.141 "data_offset": 0, 00:12:59.141 "data_size": 65536 00:12:59.141 }, 00:12:59.141 { 00:12:59.141 "name": "BaseBdev2", 00:12:59.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.141 "is_configured": false, 00:12:59.141 "data_offset": 0, 00:12:59.141 "data_size": 0 00:12:59.141 }, 00:12:59.141 { 00:12:59.141 "name": "BaseBdev3", 00:12:59.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.141 "is_configured": false, 00:12:59.141 "data_offset": 0, 00:12:59.141 "data_size": 0 00:12:59.141 }, 00:12:59.141 { 00:12:59.141 "name": "BaseBdev4", 00:12:59.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.141 "is_configured": false, 00:12:59.141 "data_offset": 0, 00:12:59.141 "data_size": 0 00:12:59.141 } 00:12:59.141 ] 00:12:59.141 }' 00:12:59.141 20:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.141 20:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.707 20:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:59.707 20:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.707 20:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.707 [2024-10-17 20:09:45.117592] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:59.707 BaseBdev2 00:12:59.707 20:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.707 20:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:59.707 20:09:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:59.707 20:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:59.707 20:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:59.707 20:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:59.707 20:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:59.707 20:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:59.707 20:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.707 20:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.707 20:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.707 20:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:59.707 20:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.707 20:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.707 [ 00:12:59.707 { 00:12:59.707 "name": "BaseBdev2", 00:12:59.707 "aliases": [ 00:12:59.707 "42005258-5df4-4be5-8d7c-b1e067e56499" 00:12:59.707 ], 00:12:59.707 "product_name": "Malloc disk", 00:12:59.707 "block_size": 512, 00:12:59.707 "num_blocks": 65536, 00:12:59.707 "uuid": "42005258-5df4-4be5-8d7c-b1e067e56499", 00:12:59.707 "assigned_rate_limits": { 00:12:59.707 "rw_ios_per_sec": 0, 00:12:59.707 "rw_mbytes_per_sec": 0, 00:12:59.707 "r_mbytes_per_sec": 0, 00:12:59.707 "w_mbytes_per_sec": 0 00:12:59.707 }, 00:12:59.707 "claimed": true, 00:12:59.707 "claim_type": "exclusive_write", 00:12:59.707 "zoned": false, 00:12:59.707 "supported_io_types": { 
00:12:59.707 "read": true, 00:12:59.707 "write": true, 00:12:59.707 "unmap": true, 00:12:59.707 "flush": true, 00:12:59.707 "reset": true, 00:12:59.707 "nvme_admin": false, 00:12:59.707 "nvme_io": false, 00:12:59.707 "nvme_io_md": false, 00:12:59.707 "write_zeroes": true, 00:12:59.707 "zcopy": true, 00:12:59.707 "get_zone_info": false, 00:12:59.707 "zone_management": false, 00:12:59.707 "zone_append": false, 00:12:59.707 "compare": false, 00:12:59.707 "compare_and_write": false, 00:12:59.707 "abort": true, 00:12:59.707 "seek_hole": false, 00:12:59.707 "seek_data": false, 00:12:59.707 "copy": true, 00:12:59.707 "nvme_iov_md": false 00:12:59.707 }, 00:12:59.707 "memory_domains": [ 00:12:59.707 { 00:12:59.707 "dma_device_id": "system", 00:12:59.707 "dma_device_type": 1 00:12:59.707 }, 00:12:59.707 { 00:12:59.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.707 "dma_device_type": 2 00:12:59.707 } 00:12:59.707 ], 00:12:59.707 "driver_specific": {} 00:12:59.707 } 00:12:59.707 ] 00:12:59.707 20:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.707 20:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:59.707 20:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:59.707 20:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:59.707 20:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:59.707 20:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:59.707 20:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:59.707 20:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:59.707 20:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:12:59.707 20:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:59.707 20:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.707 20:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.707 20:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.707 20:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.707 20:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.707 20:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.707 20:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.707 20:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:59.707 20:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.707 20:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.707 "name": "Existed_Raid", 00:12:59.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.707 "strip_size_kb": 64, 00:12:59.707 "state": "configuring", 00:12:59.707 "raid_level": "concat", 00:12:59.707 "superblock": false, 00:12:59.707 "num_base_bdevs": 4, 00:12:59.707 "num_base_bdevs_discovered": 2, 00:12:59.707 "num_base_bdevs_operational": 4, 00:12:59.707 "base_bdevs_list": [ 00:12:59.707 { 00:12:59.707 "name": "BaseBdev1", 00:12:59.707 "uuid": "26c216fc-7e8d-432b-ada6-391c148d3f1c", 00:12:59.707 "is_configured": true, 00:12:59.707 "data_offset": 0, 00:12:59.707 "data_size": 65536 00:12:59.707 }, 00:12:59.707 { 00:12:59.707 "name": "BaseBdev2", 00:12:59.707 "uuid": "42005258-5df4-4be5-8d7c-b1e067e56499", 00:12:59.707 
"is_configured": true, 00:12:59.707 "data_offset": 0, 00:12:59.707 "data_size": 65536 00:12:59.707 }, 00:12:59.707 { 00:12:59.707 "name": "BaseBdev3", 00:12:59.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.707 "is_configured": false, 00:12:59.707 "data_offset": 0, 00:12:59.707 "data_size": 0 00:12:59.707 }, 00:12:59.707 { 00:12:59.707 "name": "BaseBdev4", 00:12:59.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.707 "is_configured": false, 00:12:59.707 "data_offset": 0, 00:12:59.707 "data_size": 0 00:12:59.707 } 00:12:59.707 ] 00:12:59.707 }' 00:12:59.707 20:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.707 20:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.272 20:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:00.272 20:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.272 20:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.272 [2024-10-17 20:09:45.771868] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:00.272 BaseBdev3 00:13:00.272 20:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.272 20:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:00.272 20:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:00.272 20:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:00.272 20:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:00.272 20:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:00.272 20:09:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:00.272 20:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:00.272 20:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.272 20:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.272 20:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.272 20:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:00.272 20:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.272 20:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.272 [ 00:13:00.272 { 00:13:00.272 "name": "BaseBdev3", 00:13:00.272 "aliases": [ 00:13:00.272 "5ef329d7-c405-4dd0-a51f-78309e1c0d1d" 00:13:00.272 ], 00:13:00.272 "product_name": "Malloc disk", 00:13:00.272 "block_size": 512, 00:13:00.272 "num_blocks": 65536, 00:13:00.272 "uuid": "5ef329d7-c405-4dd0-a51f-78309e1c0d1d", 00:13:00.272 "assigned_rate_limits": { 00:13:00.272 "rw_ios_per_sec": 0, 00:13:00.272 "rw_mbytes_per_sec": 0, 00:13:00.272 "r_mbytes_per_sec": 0, 00:13:00.272 "w_mbytes_per_sec": 0 00:13:00.272 }, 00:13:00.272 "claimed": true, 00:13:00.272 "claim_type": "exclusive_write", 00:13:00.272 "zoned": false, 00:13:00.272 "supported_io_types": { 00:13:00.272 "read": true, 00:13:00.272 "write": true, 00:13:00.272 "unmap": true, 00:13:00.272 "flush": true, 00:13:00.272 "reset": true, 00:13:00.272 "nvme_admin": false, 00:13:00.272 "nvme_io": false, 00:13:00.272 "nvme_io_md": false, 00:13:00.272 "write_zeroes": true, 00:13:00.272 "zcopy": true, 00:13:00.272 "get_zone_info": false, 00:13:00.272 "zone_management": false, 00:13:00.272 "zone_append": false, 00:13:00.272 "compare": false, 00:13:00.272 "compare_and_write": false, 
00:13:00.272 "abort": true, 00:13:00.272 "seek_hole": false, 00:13:00.272 "seek_data": false, 00:13:00.272 "copy": true, 00:13:00.272 "nvme_iov_md": false 00:13:00.272 }, 00:13:00.272 "memory_domains": [ 00:13:00.272 { 00:13:00.272 "dma_device_id": "system", 00:13:00.272 "dma_device_type": 1 00:13:00.272 }, 00:13:00.272 { 00:13:00.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:00.272 "dma_device_type": 2 00:13:00.272 } 00:13:00.272 ], 00:13:00.272 "driver_specific": {} 00:13:00.272 } 00:13:00.272 ] 00:13:00.272 20:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.272 20:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:00.272 20:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:00.272 20:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:00.272 20:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:00.272 20:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:00.272 20:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:00.272 20:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:00.272 20:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:00.272 20:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:00.272 20:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.272 20:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.272 20:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:13:00.272 20:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.272 20:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.272 20:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:00.273 20:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.273 20:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.273 20:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.273 20:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.273 "name": "Existed_Raid", 00:13:00.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.273 "strip_size_kb": 64, 00:13:00.273 "state": "configuring", 00:13:00.273 "raid_level": "concat", 00:13:00.273 "superblock": false, 00:13:00.273 "num_base_bdevs": 4, 00:13:00.273 "num_base_bdevs_discovered": 3, 00:13:00.273 "num_base_bdevs_operational": 4, 00:13:00.273 "base_bdevs_list": [ 00:13:00.273 { 00:13:00.273 "name": "BaseBdev1", 00:13:00.273 "uuid": "26c216fc-7e8d-432b-ada6-391c148d3f1c", 00:13:00.273 "is_configured": true, 00:13:00.273 "data_offset": 0, 00:13:00.273 "data_size": 65536 00:13:00.273 }, 00:13:00.273 { 00:13:00.273 "name": "BaseBdev2", 00:13:00.273 "uuid": "42005258-5df4-4be5-8d7c-b1e067e56499", 00:13:00.273 "is_configured": true, 00:13:00.273 "data_offset": 0, 00:13:00.273 "data_size": 65536 00:13:00.273 }, 00:13:00.273 { 00:13:00.273 "name": "BaseBdev3", 00:13:00.273 "uuid": "5ef329d7-c405-4dd0-a51f-78309e1c0d1d", 00:13:00.273 "is_configured": true, 00:13:00.273 "data_offset": 0, 00:13:00.273 "data_size": 65536 00:13:00.273 }, 00:13:00.273 { 00:13:00.273 "name": "BaseBdev4", 00:13:00.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.273 "is_configured": false, 
00:13:00.273 "data_offset": 0, 00:13:00.273 "data_size": 0 00:13:00.273 } 00:13:00.273 ] 00:13:00.273 }' 00:13:00.273 20:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.273 20:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.839 20:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:00.839 20:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.839 20:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.839 [2024-10-17 20:09:46.395205] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:00.839 [2024-10-17 20:09:46.395297] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:00.839 [2024-10-17 20:09:46.395310] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:13:00.839 [2024-10-17 20:09:46.395704] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:00.839 [2024-10-17 20:09:46.395928] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:00.839 [2024-10-17 20:09:46.395962] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:00.839 [2024-10-17 20:09:46.396314] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:00.839 BaseBdev4 00:13:00.839 20:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.840 20:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:00.840 20:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:13:00.840 20:09:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:00.840 20:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:00.840 20:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:00.840 20:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:00.840 20:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:00.840 20:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.840 20:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.840 20:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.840 20:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:00.840 20:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.840 20:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.840 [ 00:13:00.840 { 00:13:00.840 "name": "BaseBdev4", 00:13:00.840 "aliases": [ 00:13:00.840 "232a340f-78f4-4f65-b353-d5bad38fdbeb" 00:13:00.840 ], 00:13:00.840 "product_name": "Malloc disk", 00:13:00.840 "block_size": 512, 00:13:00.840 "num_blocks": 65536, 00:13:00.840 "uuid": "232a340f-78f4-4f65-b353-d5bad38fdbeb", 00:13:00.840 "assigned_rate_limits": { 00:13:00.840 "rw_ios_per_sec": 0, 00:13:00.840 "rw_mbytes_per_sec": 0, 00:13:00.840 "r_mbytes_per_sec": 0, 00:13:00.840 "w_mbytes_per_sec": 0 00:13:00.840 }, 00:13:00.840 "claimed": true, 00:13:00.840 "claim_type": "exclusive_write", 00:13:00.840 "zoned": false, 00:13:00.840 "supported_io_types": { 00:13:00.840 "read": true, 00:13:00.840 "write": true, 00:13:00.840 "unmap": true, 00:13:00.840 "flush": true, 00:13:00.840 "reset": true, 00:13:00.840 
"nvme_admin": false, 00:13:00.840 "nvme_io": false, 00:13:00.840 "nvme_io_md": false, 00:13:00.840 "write_zeroes": true, 00:13:00.840 "zcopy": true, 00:13:00.840 "get_zone_info": false, 00:13:00.840 "zone_management": false, 00:13:00.840 "zone_append": false, 00:13:00.840 "compare": false, 00:13:00.840 "compare_and_write": false, 00:13:00.840 "abort": true, 00:13:00.840 "seek_hole": false, 00:13:00.840 "seek_data": false, 00:13:00.840 "copy": true, 00:13:00.840 "nvme_iov_md": false 00:13:00.840 }, 00:13:00.840 "memory_domains": [ 00:13:00.840 { 00:13:00.840 "dma_device_id": "system", 00:13:00.840 "dma_device_type": 1 00:13:00.840 }, 00:13:00.840 { 00:13:00.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:00.840 "dma_device_type": 2 00:13:00.840 } 00:13:00.840 ], 00:13:00.840 "driver_specific": {} 00:13:00.840 } 00:13:00.840 ] 00:13:00.840 20:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.840 20:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:00.840 20:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:00.840 20:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:00.840 20:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:13:00.840 20:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:00.840 20:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:00.840 20:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:00.840 20:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:00.840 20:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:00.840 
20:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.840 20:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.840 20:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.840 20:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.840 20:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.840 20:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.840 20:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.840 20:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:00.840 20:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.840 20:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.840 "name": "Existed_Raid", 00:13:00.840 "uuid": "808be2bf-bb76-40af-ad7d-738e5fc263b4", 00:13:00.840 "strip_size_kb": 64, 00:13:00.840 "state": "online", 00:13:00.840 "raid_level": "concat", 00:13:00.840 "superblock": false, 00:13:00.840 "num_base_bdevs": 4, 00:13:00.840 "num_base_bdevs_discovered": 4, 00:13:00.840 "num_base_bdevs_operational": 4, 00:13:00.840 "base_bdevs_list": [ 00:13:00.840 { 00:13:00.840 "name": "BaseBdev1", 00:13:00.840 "uuid": "26c216fc-7e8d-432b-ada6-391c148d3f1c", 00:13:00.840 "is_configured": true, 00:13:00.840 "data_offset": 0, 00:13:00.840 "data_size": 65536 00:13:00.840 }, 00:13:00.840 { 00:13:00.840 "name": "BaseBdev2", 00:13:00.840 "uuid": "42005258-5df4-4be5-8d7c-b1e067e56499", 00:13:00.840 "is_configured": true, 00:13:00.840 "data_offset": 0, 00:13:00.840 "data_size": 65536 00:13:00.840 }, 00:13:00.840 { 00:13:00.840 "name": "BaseBdev3", 
00:13:00.840 "uuid": "5ef329d7-c405-4dd0-a51f-78309e1c0d1d", 00:13:00.840 "is_configured": true, 00:13:00.840 "data_offset": 0, 00:13:00.840 "data_size": 65536 00:13:00.840 }, 00:13:00.840 { 00:13:00.840 "name": "BaseBdev4", 00:13:00.840 "uuid": "232a340f-78f4-4f65-b353-d5bad38fdbeb", 00:13:00.840 "is_configured": true, 00:13:00.840 "data_offset": 0, 00:13:00.840 "data_size": 65536 00:13:00.840 } 00:13:00.840 ] 00:13:00.840 }' 00:13:00.840 20:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.840 20:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.411 20:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:01.411 20:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:01.411 20:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:01.411 20:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:01.411 20:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:01.411 20:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:01.411 20:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:01.411 20:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.411 20:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:01.411 20:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.411 [2024-10-17 20:09:46.975911] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:01.411 20:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.411 
20:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:01.411 "name": "Existed_Raid", 00:13:01.411 "aliases": [ 00:13:01.411 "808be2bf-bb76-40af-ad7d-738e5fc263b4" 00:13:01.411 ], 00:13:01.411 "product_name": "Raid Volume", 00:13:01.411 "block_size": 512, 00:13:01.411 "num_blocks": 262144, 00:13:01.411 "uuid": "808be2bf-bb76-40af-ad7d-738e5fc263b4", 00:13:01.411 "assigned_rate_limits": { 00:13:01.411 "rw_ios_per_sec": 0, 00:13:01.411 "rw_mbytes_per_sec": 0, 00:13:01.411 "r_mbytes_per_sec": 0, 00:13:01.411 "w_mbytes_per_sec": 0 00:13:01.411 }, 00:13:01.411 "claimed": false, 00:13:01.411 "zoned": false, 00:13:01.411 "supported_io_types": { 00:13:01.411 "read": true, 00:13:01.411 "write": true, 00:13:01.411 "unmap": true, 00:13:01.411 "flush": true, 00:13:01.411 "reset": true, 00:13:01.411 "nvme_admin": false, 00:13:01.411 "nvme_io": false, 00:13:01.411 "nvme_io_md": false, 00:13:01.411 "write_zeroes": true, 00:13:01.411 "zcopy": false, 00:13:01.411 "get_zone_info": false, 00:13:01.411 "zone_management": false, 00:13:01.411 "zone_append": false, 00:13:01.411 "compare": false, 00:13:01.411 "compare_and_write": false, 00:13:01.411 "abort": false, 00:13:01.411 "seek_hole": false, 00:13:01.411 "seek_data": false, 00:13:01.411 "copy": false, 00:13:01.411 "nvme_iov_md": false 00:13:01.411 }, 00:13:01.411 "memory_domains": [ 00:13:01.411 { 00:13:01.411 "dma_device_id": "system", 00:13:01.411 "dma_device_type": 1 00:13:01.411 }, 00:13:01.411 { 00:13:01.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:01.411 "dma_device_type": 2 00:13:01.411 }, 00:13:01.411 { 00:13:01.411 "dma_device_id": "system", 00:13:01.411 "dma_device_type": 1 00:13:01.411 }, 00:13:01.411 { 00:13:01.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:01.411 "dma_device_type": 2 00:13:01.411 }, 00:13:01.411 { 00:13:01.411 "dma_device_id": "system", 00:13:01.411 "dma_device_type": 1 00:13:01.411 }, 00:13:01.411 { 00:13:01.411 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:13:01.411 "dma_device_type": 2 00:13:01.411 }, 00:13:01.411 { 00:13:01.411 "dma_device_id": "system", 00:13:01.411 "dma_device_type": 1 00:13:01.411 }, 00:13:01.411 { 00:13:01.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:01.411 "dma_device_type": 2 00:13:01.411 } 00:13:01.411 ], 00:13:01.411 "driver_specific": { 00:13:01.411 "raid": { 00:13:01.411 "uuid": "808be2bf-bb76-40af-ad7d-738e5fc263b4", 00:13:01.411 "strip_size_kb": 64, 00:13:01.411 "state": "online", 00:13:01.411 "raid_level": "concat", 00:13:01.411 "superblock": false, 00:13:01.411 "num_base_bdevs": 4, 00:13:01.411 "num_base_bdevs_discovered": 4, 00:13:01.411 "num_base_bdevs_operational": 4, 00:13:01.411 "base_bdevs_list": [ 00:13:01.411 { 00:13:01.411 "name": "BaseBdev1", 00:13:01.411 "uuid": "26c216fc-7e8d-432b-ada6-391c148d3f1c", 00:13:01.411 "is_configured": true, 00:13:01.411 "data_offset": 0, 00:13:01.411 "data_size": 65536 00:13:01.411 }, 00:13:01.411 { 00:13:01.411 "name": "BaseBdev2", 00:13:01.411 "uuid": "42005258-5df4-4be5-8d7c-b1e067e56499", 00:13:01.411 "is_configured": true, 00:13:01.411 "data_offset": 0, 00:13:01.411 "data_size": 65536 00:13:01.411 }, 00:13:01.411 { 00:13:01.411 "name": "BaseBdev3", 00:13:01.411 "uuid": "5ef329d7-c405-4dd0-a51f-78309e1c0d1d", 00:13:01.411 "is_configured": true, 00:13:01.411 "data_offset": 0, 00:13:01.411 "data_size": 65536 00:13:01.411 }, 00:13:01.411 { 00:13:01.411 "name": "BaseBdev4", 00:13:01.411 "uuid": "232a340f-78f4-4f65-b353-d5bad38fdbeb", 00:13:01.411 "is_configured": true, 00:13:01.411 "data_offset": 0, 00:13:01.411 "data_size": 65536 00:13:01.411 } 00:13:01.411 ] 00:13:01.411 } 00:13:01.411 } 00:13:01.411 }' 00:13:01.411 20:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:01.670 20:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:01.670 BaseBdev2 
00:13:01.670 BaseBdev3 00:13:01.670 BaseBdev4' 00:13:01.670 20:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:01.670 20:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:01.670 20:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:01.670 20:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:01.670 20:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.670 20:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.670 20:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:01.670 20:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.670 20:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:01.670 20:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:01.670 20:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:01.670 20:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:01.670 20:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:01.670 20:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.670 20:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.670 20:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.670 20:09:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:01.670 20:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:01.670 20:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:01.670 20:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:01.670 20:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:01.670 20:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.670 20:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.670 20:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.670 20:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:01.670 20:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:01.670 20:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:01.670 20:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:01.670 20:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.670 20:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.670 20:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:01.670 20:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.928 20:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:01.928 20:09:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:01.928 20:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:01.928 20:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.928 20:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.928 [2024-10-17 20:09:47.339598] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:01.928 [2024-10-17 20:09:47.339642] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:01.928 [2024-10-17 20:09:47.339708] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:01.928 20:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.928 20:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:01.928 20:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:13:01.928 20:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:01.928 20:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:01.928 20:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:13:01.928 20:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:13:01.928 20:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:01.928 20:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:13:01.928 20:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:01.928 20:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:13:01.928 20:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:01.928 20:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.928 20:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.928 20:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.928 20:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.928 20:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.928 20:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:01.928 20:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.928 20:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.928 20:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.928 20:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.928 "name": "Existed_Raid", 00:13:01.928 "uuid": "808be2bf-bb76-40af-ad7d-738e5fc263b4", 00:13:01.928 "strip_size_kb": 64, 00:13:01.928 "state": "offline", 00:13:01.928 "raid_level": "concat", 00:13:01.928 "superblock": false, 00:13:01.928 "num_base_bdevs": 4, 00:13:01.928 "num_base_bdevs_discovered": 3, 00:13:01.928 "num_base_bdevs_operational": 3, 00:13:01.928 "base_bdevs_list": [ 00:13:01.928 { 00:13:01.928 "name": null, 00:13:01.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.928 "is_configured": false, 00:13:01.928 "data_offset": 0, 00:13:01.928 "data_size": 65536 00:13:01.928 }, 00:13:01.928 { 00:13:01.928 "name": "BaseBdev2", 00:13:01.928 "uuid": "42005258-5df4-4be5-8d7c-b1e067e56499", 00:13:01.928 "is_configured": 
true, 00:13:01.928 "data_offset": 0, 00:13:01.928 "data_size": 65536 00:13:01.928 }, 00:13:01.928 { 00:13:01.928 "name": "BaseBdev3", 00:13:01.928 "uuid": "5ef329d7-c405-4dd0-a51f-78309e1c0d1d", 00:13:01.928 "is_configured": true, 00:13:01.928 "data_offset": 0, 00:13:01.928 "data_size": 65536 00:13:01.928 }, 00:13:01.928 { 00:13:01.928 "name": "BaseBdev4", 00:13:01.928 "uuid": "232a340f-78f4-4f65-b353-d5bad38fdbeb", 00:13:01.928 "is_configured": true, 00:13:01.928 "data_offset": 0, 00:13:01.928 "data_size": 65536 00:13:01.928 } 00:13:01.928 ] 00:13:01.928 }' 00:13:01.928 20:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.928 20:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.496 20:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:02.496 20:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:02.496 20:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:02.496 20:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.496 20:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.496 20:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.496 20:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.496 20:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:02.496 20:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:02.496 20:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:02.496 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:02.496 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.496 [2024-10-17 20:09:48.015304] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:02.496 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.496 20:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:02.496 20:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:02.496 20:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.496 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.496 20:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:02.496 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.496 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.755 20:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:02.755 20:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:02.755 20:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:02.755 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.755 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.755 [2024-10-17 20:09:48.151857] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:02.755 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.755 20:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:02.755 20:09:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:02.755 20:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.755 20:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:02.755 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.755 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.755 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.755 20:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:02.755 20:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:02.755 20:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:02.755 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.755 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.755 [2024-10-17 20:09:48.301959] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:02.755 [2024-10-17 20:09:48.302056] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:02.755 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.755 20:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:02.755 20:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:02.755 20:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.755 20:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:13:02.755 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.755 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.755 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.014 20:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:03.014 20:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:03.014 20:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:03.014 20:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:03.014 20:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:03.014 20:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:03.014 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.014 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.014 BaseBdev2 00:13:03.014 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.014 20:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:03.014 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:03.014 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:03.014 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:03.014 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:03.014 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:13:03.014 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:03.014 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.014 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.015 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.015 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:03.015 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.015 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.015 [ 00:13:03.015 { 00:13:03.015 "name": "BaseBdev2", 00:13:03.015 "aliases": [ 00:13:03.015 "daf9615a-0e73-4143-9ecf-0be92eca9d1c" 00:13:03.015 ], 00:13:03.015 "product_name": "Malloc disk", 00:13:03.015 "block_size": 512, 00:13:03.015 "num_blocks": 65536, 00:13:03.015 "uuid": "daf9615a-0e73-4143-9ecf-0be92eca9d1c", 00:13:03.015 "assigned_rate_limits": { 00:13:03.015 "rw_ios_per_sec": 0, 00:13:03.015 "rw_mbytes_per_sec": 0, 00:13:03.015 "r_mbytes_per_sec": 0, 00:13:03.015 "w_mbytes_per_sec": 0 00:13:03.015 }, 00:13:03.015 "claimed": false, 00:13:03.015 "zoned": false, 00:13:03.015 "supported_io_types": { 00:13:03.015 "read": true, 00:13:03.015 "write": true, 00:13:03.015 "unmap": true, 00:13:03.015 "flush": true, 00:13:03.015 "reset": true, 00:13:03.015 "nvme_admin": false, 00:13:03.015 "nvme_io": false, 00:13:03.015 "nvme_io_md": false, 00:13:03.015 "write_zeroes": true, 00:13:03.015 "zcopy": true, 00:13:03.015 "get_zone_info": false, 00:13:03.015 "zone_management": false, 00:13:03.015 "zone_append": false, 00:13:03.015 "compare": false, 00:13:03.015 "compare_and_write": false, 00:13:03.015 "abort": true, 00:13:03.015 "seek_hole": false, 00:13:03.015 
"seek_data": false, 00:13:03.015 "copy": true, 00:13:03.015 "nvme_iov_md": false 00:13:03.015 }, 00:13:03.015 "memory_domains": [ 00:13:03.015 { 00:13:03.015 "dma_device_id": "system", 00:13:03.015 "dma_device_type": 1 00:13:03.015 }, 00:13:03.015 { 00:13:03.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:03.015 "dma_device_type": 2 00:13:03.015 } 00:13:03.015 ], 00:13:03.015 "driver_specific": {} 00:13:03.015 } 00:13:03.015 ] 00:13:03.015 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.015 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:03.015 20:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:03.015 20:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:03.015 20:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:03.015 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.015 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.015 BaseBdev3 00:13:03.015 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.015 20:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:03.015 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:03.015 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:03.015 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:03.015 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:03.015 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:13:03.015 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:03.015 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.015 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.015 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.015 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:03.015 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.015 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.015 [ 00:13:03.015 { 00:13:03.015 "name": "BaseBdev3", 00:13:03.015 "aliases": [ 00:13:03.015 "938ec504-30a5-4d0e-8c5a-d7d318a2d094" 00:13:03.015 ], 00:13:03.015 "product_name": "Malloc disk", 00:13:03.015 "block_size": 512, 00:13:03.015 "num_blocks": 65536, 00:13:03.015 "uuid": "938ec504-30a5-4d0e-8c5a-d7d318a2d094", 00:13:03.015 "assigned_rate_limits": { 00:13:03.015 "rw_ios_per_sec": 0, 00:13:03.015 "rw_mbytes_per_sec": 0, 00:13:03.015 "r_mbytes_per_sec": 0, 00:13:03.015 "w_mbytes_per_sec": 0 00:13:03.015 }, 00:13:03.015 "claimed": false, 00:13:03.015 "zoned": false, 00:13:03.015 "supported_io_types": { 00:13:03.015 "read": true, 00:13:03.015 "write": true, 00:13:03.015 "unmap": true, 00:13:03.015 "flush": true, 00:13:03.015 "reset": true, 00:13:03.015 "nvme_admin": false, 00:13:03.015 "nvme_io": false, 00:13:03.015 "nvme_io_md": false, 00:13:03.015 "write_zeroes": true, 00:13:03.015 "zcopy": true, 00:13:03.015 "get_zone_info": false, 00:13:03.015 "zone_management": false, 00:13:03.015 "zone_append": false, 00:13:03.015 "compare": false, 00:13:03.015 "compare_and_write": false, 00:13:03.015 "abort": true, 00:13:03.015 "seek_hole": false, 00:13:03.015 "seek_data": false, 
00:13:03.015 "copy": true, 00:13:03.015 "nvme_iov_md": false 00:13:03.015 }, 00:13:03.015 "memory_domains": [ 00:13:03.015 { 00:13:03.015 "dma_device_id": "system", 00:13:03.015 "dma_device_type": 1 00:13:03.015 }, 00:13:03.015 { 00:13:03.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:03.015 "dma_device_type": 2 00:13:03.015 } 00:13:03.015 ], 00:13:03.015 "driver_specific": {} 00:13:03.015 } 00:13:03.015 ] 00:13:03.015 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.015 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:03.015 20:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:03.015 20:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:03.015 20:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:03.015 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.015 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.015 BaseBdev4 00:13:03.015 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.015 20:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:03.015 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:13:03.015 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:03.015 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:03.015 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:03.015 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:03.015 
20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:03.015 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.015 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.015 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.015 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:03.015 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.015 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.015 [ 00:13:03.015 { 00:13:03.015 "name": "BaseBdev4", 00:13:03.015 "aliases": [ 00:13:03.015 "0ad4bdf7-8798-4108-86c3-2f3f68d72d36" 00:13:03.015 ], 00:13:03.015 "product_name": "Malloc disk", 00:13:03.015 "block_size": 512, 00:13:03.015 "num_blocks": 65536, 00:13:03.015 "uuid": "0ad4bdf7-8798-4108-86c3-2f3f68d72d36", 00:13:03.015 "assigned_rate_limits": { 00:13:03.015 "rw_ios_per_sec": 0, 00:13:03.015 "rw_mbytes_per_sec": 0, 00:13:03.015 "r_mbytes_per_sec": 0, 00:13:03.015 "w_mbytes_per_sec": 0 00:13:03.015 }, 00:13:03.015 "claimed": false, 00:13:03.015 "zoned": false, 00:13:03.015 "supported_io_types": { 00:13:03.015 "read": true, 00:13:03.015 "write": true, 00:13:03.015 "unmap": true, 00:13:03.015 "flush": true, 00:13:03.015 "reset": true, 00:13:03.015 "nvme_admin": false, 00:13:03.015 "nvme_io": false, 00:13:03.015 "nvme_io_md": false, 00:13:03.015 "write_zeroes": true, 00:13:03.015 "zcopy": true, 00:13:03.015 "get_zone_info": false, 00:13:03.015 "zone_management": false, 00:13:03.015 "zone_append": false, 00:13:03.015 "compare": false, 00:13:03.015 "compare_and_write": false, 00:13:03.015 "abort": true, 00:13:03.015 "seek_hole": false, 00:13:03.015 "seek_data": false, 00:13:03.015 
"copy": true, 00:13:03.015 "nvme_iov_md": false 00:13:03.015 }, 00:13:03.015 "memory_domains": [ 00:13:03.015 { 00:13:03.015 "dma_device_id": "system", 00:13:03.015 "dma_device_type": 1 00:13:03.015 }, 00:13:03.015 { 00:13:03.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:03.015 "dma_device_type": 2 00:13:03.015 } 00:13:03.015 ], 00:13:03.015 "driver_specific": {} 00:13:03.015 } 00:13:03.015 ] 00:13:03.015 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.015 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:03.015 20:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:03.015 20:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:03.015 20:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:03.015 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.015 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.015 [2024-10-17 20:09:48.659773] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:03.015 [2024-10-17 20:09:48.659914] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:03.016 [2024-10-17 20:09:48.659964] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:03.016 [2024-10-17 20:09:48.662658] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:03.016 [2024-10-17 20:09:48.662728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:03.274 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.274 20:09:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:03.274 20:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:03.274 20:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:03.274 20:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:03.274 20:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:03.274 20:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:03.274 20:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:03.274 20:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:03.274 20:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:03.274 20:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:03.274 20:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.274 20:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:03.274 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.274 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.274 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.274 20:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:03.274 "name": "Existed_Raid", 00:13:03.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.274 "strip_size_kb": 64, 00:13:03.274 "state": "configuring", 00:13:03.274 
"raid_level": "concat", 00:13:03.274 "superblock": false, 00:13:03.274 "num_base_bdevs": 4, 00:13:03.274 "num_base_bdevs_discovered": 3, 00:13:03.274 "num_base_bdevs_operational": 4, 00:13:03.274 "base_bdevs_list": [ 00:13:03.274 { 00:13:03.274 "name": "BaseBdev1", 00:13:03.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.274 "is_configured": false, 00:13:03.274 "data_offset": 0, 00:13:03.274 "data_size": 0 00:13:03.274 }, 00:13:03.274 { 00:13:03.274 "name": "BaseBdev2", 00:13:03.274 "uuid": "daf9615a-0e73-4143-9ecf-0be92eca9d1c", 00:13:03.274 "is_configured": true, 00:13:03.274 "data_offset": 0, 00:13:03.274 "data_size": 65536 00:13:03.274 }, 00:13:03.274 { 00:13:03.274 "name": "BaseBdev3", 00:13:03.274 "uuid": "938ec504-30a5-4d0e-8c5a-d7d318a2d094", 00:13:03.274 "is_configured": true, 00:13:03.274 "data_offset": 0, 00:13:03.274 "data_size": 65536 00:13:03.274 }, 00:13:03.274 { 00:13:03.274 "name": "BaseBdev4", 00:13:03.274 "uuid": "0ad4bdf7-8798-4108-86c3-2f3f68d72d36", 00:13:03.274 "is_configured": true, 00:13:03.274 "data_offset": 0, 00:13:03.274 "data_size": 65536 00:13:03.274 } 00:13:03.274 ] 00:13:03.274 }' 00:13:03.274 20:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:03.274 20:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.840 20:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:03.840 20:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.840 20:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.841 [2024-10-17 20:09:49.195900] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:03.841 20:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.841 20:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:03.841 20:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:03.841 20:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:03.841 20:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:03.841 20:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:03.841 20:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:03.841 20:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:03.841 20:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:03.841 20:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:03.841 20:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:03.841 20:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.841 20:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.841 20:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.841 20:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:03.841 20:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.841 20:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:03.841 "name": "Existed_Raid", 00:13:03.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.841 "strip_size_kb": 64, 00:13:03.841 "state": "configuring", 00:13:03.841 "raid_level": "concat", 00:13:03.841 "superblock": false, 
00:13:03.841 "num_base_bdevs": 4, 00:13:03.841 "num_base_bdevs_discovered": 2, 00:13:03.841 "num_base_bdevs_operational": 4, 00:13:03.841 "base_bdevs_list": [ 00:13:03.841 { 00:13:03.841 "name": "BaseBdev1", 00:13:03.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.841 "is_configured": false, 00:13:03.841 "data_offset": 0, 00:13:03.841 "data_size": 0 00:13:03.841 }, 00:13:03.841 { 00:13:03.841 "name": null, 00:13:03.841 "uuid": "daf9615a-0e73-4143-9ecf-0be92eca9d1c", 00:13:03.841 "is_configured": false, 00:13:03.841 "data_offset": 0, 00:13:03.841 "data_size": 65536 00:13:03.841 }, 00:13:03.841 { 00:13:03.841 "name": "BaseBdev3", 00:13:03.841 "uuid": "938ec504-30a5-4d0e-8c5a-d7d318a2d094", 00:13:03.841 "is_configured": true, 00:13:03.841 "data_offset": 0, 00:13:03.841 "data_size": 65536 00:13:03.841 }, 00:13:03.841 { 00:13:03.841 "name": "BaseBdev4", 00:13:03.841 "uuid": "0ad4bdf7-8798-4108-86c3-2f3f68d72d36", 00:13:03.841 "is_configured": true, 00:13:03.841 "data_offset": 0, 00:13:03.841 "data_size": 65536 00:13:03.841 } 00:13:03.841 ] 00:13:03.841 }' 00:13:03.841 20:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:03.841 20:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.159 20:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.159 20:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:04.159 20:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.159 20:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.159 20:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.159 20:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:04.159 20:09:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:04.159 20:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.159 20:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.417 [2024-10-17 20:09:49.821595] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:04.417 BaseBdev1 00:13:04.417 20:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.417 20:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:04.417 20:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:04.417 20:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:04.417 20:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:04.417 20:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:04.417 20:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:04.417 20:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:04.417 20:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.417 20:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.417 20:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.417 20:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:04.417 20:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.417 20:09:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:04.417 [ 00:13:04.417 { 00:13:04.417 "name": "BaseBdev1", 00:13:04.417 "aliases": [ 00:13:04.417 "78f3152e-8a8c-45e8-9b27-6b8f23d1ecf6" 00:13:04.417 ], 00:13:04.417 "product_name": "Malloc disk", 00:13:04.417 "block_size": 512, 00:13:04.417 "num_blocks": 65536, 00:13:04.417 "uuid": "78f3152e-8a8c-45e8-9b27-6b8f23d1ecf6", 00:13:04.417 "assigned_rate_limits": { 00:13:04.417 "rw_ios_per_sec": 0, 00:13:04.417 "rw_mbytes_per_sec": 0, 00:13:04.417 "r_mbytes_per_sec": 0, 00:13:04.417 "w_mbytes_per_sec": 0 00:13:04.417 }, 00:13:04.417 "claimed": true, 00:13:04.417 "claim_type": "exclusive_write", 00:13:04.417 "zoned": false, 00:13:04.417 "supported_io_types": { 00:13:04.417 "read": true, 00:13:04.417 "write": true, 00:13:04.417 "unmap": true, 00:13:04.417 "flush": true, 00:13:04.417 "reset": true, 00:13:04.417 "nvme_admin": false, 00:13:04.418 "nvme_io": false, 00:13:04.418 "nvme_io_md": false, 00:13:04.418 "write_zeroes": true, 00:13:04.418 "zcopy": true, 00:13:04.418 "get_zone_info": false, 00:13:04.418 "zone_management": false, 00:13:04.418 "zone_append": false, 00:13:04.418 "compare": false, 00:13:04.418 "compare_and_write": false, 00:13:04.418 "abort": true, 00:13:04.418 "seek_hole": false, 00:13:04.418 "seek_data": false, 00:13:04.418 "copy": true, 00:13:04.418 "nvme_iov_md": false 00:13:04.418 }, 00:13:04.418 "memory_domains": [ 00:13:04.418 { 00:13:04.418 "dma_device_id": "system", 00:13:04.418 "dma_device_type": 1 00:13:04.418 }, 00:13:04.418 { 00:13:04.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:04.418 "dma_device_type": 2 00:13:04.418 } 00:13:04.418 ], 00:13:04.418 "driver_specific": {} 00:13:04.418 } 00:13:04.418 ] 00:13:04.418 20:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.418 20:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:04.418 20:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:04.418 20:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:04.418 20:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:04.418 20:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:04.418 20:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:04.418 20:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:04.418 20:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.418 20:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.418 20:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.418 20:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.418 20:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.418 20:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:04.418 20:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.418 20:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.418 20:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.418 20:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.418 "name": "Existed_Raid", 00:13:04.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.418 "strip_size_kb": 64, 00:13:04.418 "state": "configuring", 00:13:04.418 "raid_level": "concat", 00:13:04.418 "superblock": false, 
00:13:04.418 "num_base_bdevs": 4, 00:13:04.418 "num_base_bdevs_discovered": 3, 00:13:04.418 "num_base_bdevs_operational": 4, 00:13:04.418 "base_bdevs_list": [ 00:13:04.418 { 00:13:04.418 "name": "BaseBdev1", 00:13:04.418 "uuid": "78f3152e-8a8c-45e8-9b27-6b8f23d1ecf6", 00:13:04.418 "is_configured": true, 00:13:04.418 "data_offset": 0, 00:13:04.418 "data_size": 65536 00:13:04.418 }, 00:13:04.418 { 00:13:04.418 "name": null, 00:13:04.418 "uuid": "daf9615a-0e73-4143-9ecf-0be92eca9d1c", 00:13:04.418 "is_configured": false, 00:13:04.418 "data_offset": 0, 00:13:04.418 "data_size": 65536 00:13:04.418 }, 00:13:04.418 { 00:13:04.418 "name": "BaseBdev3", 00:13:04.418 "uuid": "938ec504-30a5-4d0e-8c5a-d7d318a2d094", 00:13:04.418 "is_configured": true, 00:13:04.418 "data_offset": 0, 00:13:04.418 "data_size": 65536 00:13:04.418 }, 00:13:04.418 { 00:13:04.418 "name": "BaseBdev4", 00:13:04.418 "uuid": "0ad4bdf7-8798-4108-86c3-2f3f68d72d36", 00:13:04.418 "is_configured": true, 00:13:04.418 "data_offset": 0, 00:13:04.418 "data_size": 65536 00:13:04.418 } 00:13:04.418 ] 00:13:04.418 }' 00:13:04.418 20:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.418 20:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.984 20:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:04.984 20:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.984 20:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.984 20:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.984 20:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.984 20:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:04.984 20:09:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:04.984 20:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.984 20:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.984 [2024-10-17 20:09:50.437819] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:04.984 20:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.984 20:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:04.984 20:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:04.984 20:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:04.984 20:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:04.984 20:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:04.984 20:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:04.984 20:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.984 20:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.984 20:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.984 20:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.984 20:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.984 20:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:04.984 20:09:50 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.984 20:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.984 20:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.984 20:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.984 "name": "Existed_Raid", 00:13:04.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.984 "strip_size_kb": 64, 00:13:04.984 "state": "configuring", 00:13:04.984 "raid_level": "concat", 00:13:04.984 "superblock": false, 00:13:04.984 "num_base_bdevs": 4, 00:13:04.984 "num_base_bdevs_discovered": 2, 00:13:04.984 "num_base_bdevs_operational": 4, 00:13:04.984 "base_bdevs_list": [ 00:13:04.984 { 00:13:04.984 "name": "BaseBdev1", 00:13:04.984 "uuid": "78f3152e-8a8c-45e8-9b27-6b8f23d1ecf6", 00:13:04.984 "is_configured": true, 00:13:04.984 "data_offset": 0, 00:13:04.984 "data_size": 65536 00:13:04.984 }, 00:13:04.984 { 00:13:04.984 "name": null, 00:13:04.984 "uuid": "daf9615a-0e73-4143-9ecf-0be92eca9d1c", 00:13:04.984 "is_configured": false, 00:13:04.984 "data_offset": 0, 00:13:04.984 "data_size": 65536 00:13:04.984 }, 00:13:04.984 { 00:13:04.984 "name": null, 00:13:04.984 "uuid": "938ec504-30a5-4d0e-8c5a-d7d318a2d094", 00:13:04.984 "is_configured": false, 00:13:04.984 "data_offset": 0, 00:13:04.984 "data_size": 65536 00:13:04.984 }, 00:13:04.984 { 00:13:04.984 "name": "BaseBdev4", 00:13:04.984 "uuid": "0ad4bdf7-8798-4108-86c3-2f3f68d72d36", 00:13:04.984 "is_configured": true, 00:13:04.984 "data_offset": 0, 00:13:04.984 "data_size": 65536 00:13:04.984 } 00:13:04.984 ] 00:13:04.984 }' 00:13:04.984 20:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.984 20:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.550 20:09:50 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.550 20:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:05.550 20:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.550 20:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.550 20:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.550 20:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:05.550 20:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:05.550 20:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.550 20:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.550 [2024-10-17 20:09:51.034119] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:05.550 20:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.550 20:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:05.550 20:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:05.550 20:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:05.550 20:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:05.550 20:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:05.550 20:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:05.550 20:09:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:05.551 20:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:05.551 20:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:05.551 20:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:05.551 20:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.551 20:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.551 20:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:05.551 20:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.551 20:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.551 20:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:05.551 "name": "Existed_Raid", 00:13:05.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.551 "strip_size_kb": 64, 00:13:05.551 "state": "configuring", 00:13:05.551 "raid_level": "concat", 00:13:05.551 "superblock": false, 00:13:05.551 "num_base_bdevs": 4, 00:13:05.551 "num_base_bdevs_discovered": 3, 00:13:05.551 "num_base_bdevs_operational": 4, 00:13:05.551 "base_bdevs_list": [ 00:13:05.551 { 00:13:05.551 "name": "BaseBdev1", 00:13:05.551 "uuid": "78f3152e-8a8c-45e8-9b27-6b8f23d1ecf6", 00:13:05.551 "is_configured": true, 00:13:05.551 "data_offset": 0, 00:13:05.551 "data_size": 65536 00:13:05.551 }, 00:13:05.551 { 00:13:05.551 "name": null, 00:13:05.551 "uuid": "daf9615a-0e73-4143-9ecf-0be92eca9d1c", 00:13:05.551 "is_configured": false, 00:13:05.551 "data_offset": 0, 00:13:05.551 "data_size": 65536 00:13:05.551 }, 00:13:05.551 { 00:13:05.551 "name": "BaseBdev3", 00:13:05.551 "uuid": 
"938ec504-30a5-4d0e-8c5a-d7d318a2d094", 00:13:05.551 "is_configured": true, 00:13:05.551 "data_offset": 0, 00:13:05.551 "data_size": 65536 00:13:05.551 }, 00:13:05.551 { 00:13:05.551 "name": "BaseBdev4", 00:13:05.551 "uuid": "0ad4bdf7-8798-4108-86c3-2f3f68d72d36", 00:13:05.551 "is_configured": true, 00:13:05.551 "data_offset": 0, 00:13:05.551 "data_size": 65536 00:13:05.551 } 00:13:05.551 ] 00:13:05.551 }' 00:13:05.551 20:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:05.551 20:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.117 20:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.117 20:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:06.117 20:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.117 20:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.117 20:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.117 20:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:06.117 20:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:06.117 20:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.117 20:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.117 [2024-10-17 20:09:51.614308] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:06.117 20:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.117 20:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:13:06.117 20:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:06.117 20:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:06.117 20:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:06.117 20:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:06.117 20:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:06.117 20:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.117 20:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.117 20:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.117 20:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.117 20:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.117 20:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:06.117 20:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.117 20:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.117 20:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.117 20:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.117 "name": "Existed_Raid", 00:13:06.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.117 "strip_size_kb": 64, 00:13:06.117 "state": "configuring", 00:13:06.117 "raid_level": "concat", 00:13:06.117 "superblock": false, 00:13:06.117 "num_base_bdevs": 4, 00:13:06.117 
"num_base_bdevs_discovered": 2, 00:13:06.117 "num_base_bdevs_operational": 4, 00:13:06.117 "base_bdevs_list": [ 00:13:06.117 { 00:13:06.117 "name": null, 00:13:06.117 "uuid": "78f3152e-8a8c-45e8-9b27-6b8f23d1ecf6", 00:13:06.117 "is_configured": false, 00:13:06.117 "data_offset": 0, 00:13:06.117 "data_size": 65536 00:13:06.117 }, 00:13:06.117 { 00:13:06.117 "name": null, 00:13:06.117 "uuid": "daf9615a-0e73-4143-9ecf-0be92eca9d1c", 00:13:06.117 "is_configured": false, 00:13:06.117 "data_offset": 0, 00:13:06.117 "data_size": 65536 00:13:06.117 }, 00:13:06.117 { 00:13:06.117 "name": "BaseBdev3", 00:13:06.117 "uuid": "938ec504-30a5-4d0e-8c5a-d7d318a2d094", 00:13:06.117 "is_configured": true, 00:13:06.117 "data_offset": 0, 00:13:06.117 "data_size": 65536 00:13:06.117 }, 00:13:06.117 { 00:13:06.117 "name": "BaseBdev4", 00:13:06.117 "uuid": "0ad4bdf7-8798-4108-86c3-2f3f68d72d36", 00:13:06.117 "is_configured": true, 00:13:06.117 "data_offset": 0, 00:13:06.117 "data_size": 65536 00:13:06.117 } 00:13:06.117 ] 00:13:06.117 }' 00:13:06.117 20:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.117 20:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.685 20:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:06.685 20:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.685 20:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.685 20:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.685 20:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.685 20:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:06.685 20:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:06.685 20:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.685 20:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.685 [2024-10-17 20:09:52.260105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:06.685 20:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.685 20:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:06.685 20:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:06.685 20:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:06.685 20:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:06.685 20:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:06.685 20:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:06.685 20:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.685 20:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.685 20:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.685 20:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.685 20:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.685 20:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:06.685 20:09:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.685 20:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.685 20:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.685 20:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.685 "name": "Existed_Raid", 00:13:06.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.685 "strip_size_kb": 64, 00:13:06.685 "state": "configuring", 00:13:06.685 "raid_level": "concat", 00:13:06.685 "superblock": false, 00:13:06.685 "num_base_bdevs": 4, 00:13:06.685 "num_base_bdevs_discovered": 3, 00:13:06.685 "num_base_bdevs_operational": 4, 00:13:06.685 "base_bdevs_list": [ 00:13:06.685 { 00:13:06.685 "name": null, 00:13:06.685 "uuid": "78f3152e-8a8c-45e8-9b27-6b8f23d1ecf6", 00:13:06.685 "is_configured": false, 00:13:06.685 "data_offset": 0, 00:13:06.685 "data_size": 65536 00:13:06.685 }, 00:13:06.685 { 00:13:06.685 "name": "BaseBdev2", 00:13:06.685 "uuid": "daf9615a-0e73-4143-9ecf-0be92eca9d1c", 00:13:06.685 "is_configured": true, 00:13:06.685 "data_offset": 0, 00:13:06.685 "data_size": 65536 00:13:06.685 }, 00:13:06.685 { 00:13:06.685 "name": "BaseBdev3", 00:13:06.685 "uuid": "938ec504-30a5-4d0e-8c5a-d7d318a2d094", 00:13:06.685 "is_configured": true, 00:13:06.685 "data_offset": 0, 00:13:06.685 "data_size": 65536 00:13:06.685 }, 00:13:06.685 { 00:13:06.685 "name": "BaseBdev4", 00:13:06.685 "uuid": "0ad4bdf7-8798-4108-86c3-2f3f68d72d36", 00:13:06.685 "is_configured": true, 00:13:06.685 "data_offset": 0, 00:13:06.685 "data_size": 65536 00:13:06.685 } 00:13:06.685 ] 00:13:06.685 }' 00:13:06.685 20:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.685 20:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.253 20:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:13:07.253 20:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.253 20:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.253 20:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.253 20:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.253 20:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:07.253 20:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.253 20:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:07.253 20:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.253 20:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.253 20:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.253 20:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 78f3152e-8a8c-45e8-9b27-6b8f23d1ecf6 00:13:07.253 20:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.253 20:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.512 [2024-10-17 20:09:52.942201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:07.512 [2024-10-17 20:09:52.942262] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:07.512 [2024-10-17 20:09:52.942274] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:13:07.512 [2024-10-17 20:09:52.942593] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:07.512 [2024-10-17 20:09:52.942767] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:07.512 [2024-10-17 20:09:52.942787] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:07.512 [2024-10-17 20:09:52.943107] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:07.512 NewBaseBdev 00:13:07.512 20:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.512 20:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:07.512 20:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:13:07.512 20:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:07.512 20:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:07.512 20:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:07.512 20:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:07.512 20:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:07.512 20:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.512 20:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.512 20:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.512 20:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:07.512 20:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.512 20:09:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.512 [ 00:13:07.512 { 00:13:07.512 "name": "NewBaseBdev", 00:13:07.512 "aliases": [ 00:13:07.512 "78f3152e-8a8c-45e8-9b27-6b8f23d1ecf6" 00:13:07.512 ], 00:13:07.512 "product_name": "Malloc disk", 00:13:07.512 "block_size": 512, 00:13:07.512 "num_blocks": 65536, 00:13:07.512 "uuid": "78f3152e-8a8c-45e8-9b27-6b8f23d1ecf6", 00:13:07.512 "assigned_rate_limits": { 00:13:07.512 "rw_ios_per_sec": 0, 00:13:07.512 "rw_mbytes_per_sec": 0, 00:13:07.512 "r_mbytes_per_sec": 0, 00:13:07.512 "w_mbytes_per_sec": 0 00:13:07.512 }, 00:13:07.512 "claimed": true, 00:13:07.512 "claim_type": "exclusive_write", 00:13:07.512 "zoned": false, 00:13:07.512 "supported_io_types": { 00:13:07.512 "read": true, 00:13:07.512 "write": true, 00:13:07.512 "unmap": true, 00:13:07.512 "flush": true, 00:13:07.512 "reset": true, 00:13:07.512 "nvme_admin": false, 00:13:07.512 "nvme_io": false, 00:13:07.512 "nvme_io_md": false, 00:13:07.512 "write_zeroes": true, 00:13:07.512 "zcopy": true, 00:13:07.512 "get_zone_info": false, 00:13:07.512 "zone_management": false, 00:13:07.512 "zone_append": false, 00:13:07.512 "compare": false, 00:13:07.512 "compare_and_write": false, 00:13:07.513 "abort": true, 00:13:07.513 "seek_hole": false, 00:13:07.513 "seek_data": false, 00:13:07.513 "copy": true, 00:13:07.513 "nvme_iov_md": false 00:13:07.513 }, 00:13:07.513 "memory_domains": [ 00:13:07.513 { 00:13:07.513 "dma_device_id": "system", 00:13:07.513 "dma_device_type": 1 00:13:07.513 }, 00:13:07.513 { 00:13:07.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:07.513 "dma_device_type": 2 00:13:07.513 } 00:13:07.513 ], 00:13:07.513 "driver_specific": {} 00:13:07.513 } 00:13:07.513 ] 00:13:07.513 20:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.513 20:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:07.513 20:09:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:13:07.513 20:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:07.513 20:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:07.513 20:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:07.513 20:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:07.513 20:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:07.513 20:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:07.513 20:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:07.513 20:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:07.513 20:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:07.513 20:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:07.513 20:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.513 20:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.513 20:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.513 20:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.513 20:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:07.513 "name": "Existed_Raid", 00:13:07.513 "uuid": "f4ce2d68-b5c1-483f-b78e-183b99d3d0e5", 00:13:07.513 "strip_size_kb": 64, 00:13:07.513 "state": "online", 00:13:07.513 "raid_level": 
"concat", 00:13:07.513 "superblock": false, 00:13:07.513 "num_base_bdevs": 4, 00:13:07.513 "num_base_bdevs_discovered": 4, 00:13:07.513 "num_base_bdevs_operational": 4, 00:13:07.513 "base_bdevs_list": [ 00:13:07.513 { 00:13:07.513 "name": "NewBaseBdev", 00:13:07.513 "uuid": "78f3152e-8a8c-45e8-9b27-6b8f23d1ecf6", 00:13:07.513 "is_configured": true, 00:13:07.513 "data_offset": 0, 00:13:07.513 "data_size": 65536 00:13:07.513 }, 00:13:07.513 { 00:13:07.513 "name": "BaseBdev2", 00:13:07.513 "uuid": "daf9615a-0e73-4143-9ecf-0be92eca9d1c", 00:13:07.513 "is_configured": true, 00:13:07.513 "data_offset": 0, 00:13:07.513 "data_size": 65536 00:13:07.513 }, 00:13:07.513 { 00:13:07.513 "name": "BaseBdev3", 00:13:07.513 "uuid": "938ec504-30a5-4d0e-8c5a-d7d318a2d094", 00:13:07.513 "is_configured": true, 00:13:07.513 "data_offset": 0, 00:13:07.513 "data_size": 65536 00:13:07.513 }, 00:13:07.513 { 00:13:07.513 "name": "BaseBdev4", 00:13:07.513 "uuid": "0ad4bdf7-8798-4108-86c3-2f3f68d72d36", 00:13:07.513 "is_configured": true, 00:13:07.513 "data_offset": 0, 00:13:07.513 "data_size": 65536 00:13:07.513 } 00:13:07.513 ] 00:13:07.513 }' 00:13:07.513 20:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:07.513 20:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.080 20:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:08.080 20:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:08.080 20:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:08.080 20:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:08.080 20:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:08.080 20:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local 
cmp_raid_bdev cmp_base_bdev 00:13:08.081 20:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:08.081 20:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.081 20:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:08.081 20:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.081 [2024-10-17 20:09:53.518885] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:08.081 20:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.081 20:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:08.081 "name": "Existed_Raid", 00:13:08.081 "aliases": [ 00:13:08.081 "f4ce2d68-b5c1-483f-b78e-183b99d3d0e5" 00:13:08.081 ], 00:13:08.081 "product_name": "Raid Volume", 00:13:08.081 "block_size": 512, 00:13:08.081 "num_blocks": 262144, 00:13:08.081 "uuid": "f4ce2d68-b5c1-483f-b78e-183b99d3d0e5", 00:13:08.081 "assigned_rate_limits": { 00:13:08.081 "rw_ios_per_sec": 0, 00:13:08.081 "rw_mbytes_per_sec": 0, 00:13:08.081 "r_mbytes_per_sec": 0, 00:13:08.081 "w_mbytes_per_sec": 0 00:13:08.081 }, 00:13:08.081 "claimed": false, 00:13:08.081 "zoned": false, 00:13:08.081 "supported_io_types": { 00:13:08.081 "read": true, 00:13:08.081 "write": true, 00:13:08.081 "unmap": true, 00:13:08.081 "flush": true, 00:13:08.081 "reset": true, 00:13:08.081 "nvme_admin": false, 00:13:08.081 "nvme_io": false, 00:13:08.081 "nvme_io_md": false, 00:13:08.081 "write_zeroes": true, 00:13:08.081 "zcopy": false, 00:13:08.081 "get_zone_info": false, 00:13:08.081 "zone_management": false, 00:13:08.081 "zone_append": false, 00:13:08.081 "compare": false, 00:13:08.081 "compare_and_write": false, 00:13:08.081 "abort": false, 00:13:08.081 "seek_hole": false, 00:13:08.081 "seek_data": false, 00:13:08.081 "copy": false, 
00:13:08.081 "nvme_iov_md": false 00:13:08.081 }, 00:13:08.081 "memory_domains": [ 00:13:08.081 { 00:13:08.081 "dma_device_id": "system", 00:13:08.081 "dma_device_type": 1 00:13:08.081 }, 00:13:08.081 { 00:13:08.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:08.081 "dma_device_type": 2 00:13:08.081 }, 00:13:08.081 { 00:13:08.081 "dma_device_id": "system", 00:13:08.081 "dma_device_type": 1 00:13:08.081 }, 00:13:08.081 { 00:13:08.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:08.081 "dma_device_type": 2 00:13:08.081 }, 00:13:08.081 { 00:13:08.081 "dma_device_id": "system", 00:13:08.081 "dma_device_type": 1 00:13:08.081 }, 00:13:08.081 { 00:13:08.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:08.081 "dma_device_type": 2 00:13:08.081 }, 00:13:08.081 { 00:13:08.081 "dma_device_id": "system", 00:13:08.081 "dma_device_type": 1 00:13:08.081 }, 00:13:08.081 { 00:13:08.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:08.081 "dma_device_type": 2 00:13:08.081 } 00:13:08.081 ], 00:13:08.081 "driver_specific": { 00:13:08.081 "raid": { 00:13:08.081 "uuid": "f4ce2d68-b5c1-483f-b78e-183b99d3d0e5", 00:13:08.081 "strip_size_kb": 64, 00:13:08.081 "state": "online", 00:13:08.081 "raid_level": "concat", 00:13:08.081 "superblock": false, 00:13:08.081 "num_base_bdevs": 4, 00:13:08.081 "num_base_bdevs_discovered": 4, 00:13:08.081 "num_base_bdevs_operational": 4, 00:13:08.081 "base_bdevs_list": [ 00:13:08.081 { 00:13:08.081 "name": "NewBaseBdev", 00:13:08.081 "uuid": "78f3152e-8a8c-45e8-9b27-6b8f23d1ecf6", 00:13:08.081 "is_configured": true, 00:13:08.081 "data_offset": 0, 00:13:08.081 "data_size": 65536 00:13:08.081 }, 00:13:08.081 { 00:13:08.081 "name": "BaseBdev2", 00:13:08.081 "uuid": "daf9615a-0e73-4143-9ecf-0be92eca9d1c", 00:13:08.081 "is_configured": true, 00:13:08.081 "data_offset": 0, 00:13:08.081 "data_size": 65536 00:13:08.081 }, 00:13:08.081 { 00:13:08.081 "name": "BaseBdev3", 00:13:08.081 "uuid": "938ec504-30a5-4d0e-8c5a-d7d318a2d094", 00:13:08.081 
"is_configured": true, 00:13:08.081 "data_offset": 0, 00:13:08.081 "data_size": 65536 00:13:08.081 }, 00:13:08.081 { 00:13:08.081 "name": "BaseBdev4", 00:13:08.081 "uuid": "0ad4bdf7-8798-4108-86c3-2f3f68d72d36", 00:13:08.081 "is_configured": true, 00:13:08.081 "data_offset": 0, 00:13:08.081 "data_size": 65536 00:13:08.081 } 00:13:08.081 ] 00:13:08.081 } 00:13:08.081 } 00:13:08.081 }' 00:13:08.081 20:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:08.081 20:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:08.081 BaseBdev2 00:13:08.081 BaseBdev3 00:13:08.081 BaseBdev4' 00:13:08.081 20:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:08.081 20:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:08.081 20:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:08.081 20:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:08.081 20:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.081 20:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.081 20:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:08.081 20:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.340 20:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:08.340 20:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:08.340 20:09:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:08.340 20:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:08.340 20:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:08.340 20:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.340 20:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.340 20:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.340 20:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:08.340 20:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:08.340 20:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:08.340 20:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:08.340 20:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:08.340 20:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.340 20:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.340 20:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.340 20:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:08.340 20:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:08.340 20:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:08.340 20:09:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:08.340 20:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.340 20:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.340 20:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:08.340 20:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.340 20:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:08.340 20:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:08.340 20:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:08.340 20:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.340 20:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.340 [2024-10-17 20:09:53.926583] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:08.340 [2024-10-17 20:09:53.926773] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:08.340 [2024-10-17 20:09:53.926989] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:08.340 [2024-10-17 20:09:53.927216] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:08.340 [2024-10-17 20:09:53.927244] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:13:08.340 20:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.340 20:09:53 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 71301 00:13:08.340 20:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 71301 ']' 00:13:08.340 20:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 71301 00:13:08.340 20:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:13:08.340 20:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:08.340 20:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71301 00:13:08.340 20:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:08.340 20:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:08.340 killing process with pid 71301 00:13:08.340 20:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71301' 00:13:08.340 20:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 71301 00:13:08.340 [2024-10-17 20:09:53.966966] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:08.340 20:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 71301 00:13:08.906 [2024-10-17 20:09:54.310336] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:09.841 ************************************ 00:13:09.841 END TEST raid_state_function_test 00:13:09.841 ************************************ 00:13:09.841 20:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:09.841 00:13:09.841 real 0m13.002s 00:13:09.841 user 0m21.673s 00:13:09.841 sys 0m1.827s 00:13:09.841 20:09:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:09.841 20:09:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:13:09.841 20:09:55 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:13:09.841 20:09:55 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:09.841 20:09:55 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:09.841 20:09:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:09.841 ************************************ 00:13:09.841 START TEST raid_state_function_test_sb 00:13:09.841 ************************************ 00:13:09.841 20:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 true 00:13:09.841 20:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:13:09.841 20:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:09.841 20:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:09.841 20:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:09.841 20:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:09.841 20:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:09.841 20:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:09.841 20:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:09.841 20:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:09.841 20:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:09.841 20:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:09.841 20:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:09.841 
20:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:09.841 20:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:09.841 20:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:09.841 20:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:09.841 20:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:09.842 20:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:09.842 20:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:09.842 20:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:09.842 20:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:09.842 20:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:09.842 20:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:09.842 20:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:09.842 20:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:13:09.842 20:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:09.842 Process raid pid: 71989 00:13:09.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:09.842 20:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:09.842 20:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:09.842 20:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:09.842 20:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=71989 00:13:09.842 20:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71989' 00:13:09.842 20:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 71989 00:13:09.842 20:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:09.842 20:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 71989 ']' 00:13:09.842 20:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:09.842 20:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:09.842 20:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:09.842 20:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:09.842 20:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.100 [2024-10-17 20:09:55.507485] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 
00:13:10.101 [2024-10-17 20:09:55.507938] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:10.101 [2024-10-17 20:09:55.686072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:10.367 [2024-10-17 20:09:55.843260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.642 [2024-10-17 20:09:56.081105] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:10.642 [2024-10-17 20:09:56.081381] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:10.900 20:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:10.900 20:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:13:10.900 20:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:10.900 20:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.900 20:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.900 [2024-10-17 20:09:56.443332] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:10.901 [2024-10-17 20:09:56.443413] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:10.901 [2024-10-17 20:09:56.443453] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:10.901 [2024-10-17 20:09:56.443492] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:10.901 [2024-10-17 20:09:56.443506] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:13:10.901 [2024-10-17 20:09:56.443527] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:10.901 [2024-10-17 20:09:56.443542] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:10.901 [2024-10-17 20:09:56.443562] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:10.901 20:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.901 20:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:10.901 20:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:10.901 20:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:10.901 20:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:10.901 20:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:10.901 20:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:10.901 20:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:10.901 20:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:10.901 20:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:10.901 20:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:10.901 20:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.901 20:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:10.901 
20:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.901 20:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.901 20:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.901 20:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:10.901 "name": "Existed_Raid", 00:13:10.901 "uuid": "878c0c5e-3c44-4fa2-a3ea-4859d9d279ae", 00:13:10.901 "strip_size_kb": 64, 00:13:10.901 "state": "configuring", 00:13:10.901 "raid_level": "concat", 00:13:10.901 "superblock": true, 00:13:10.901 "num_base_bdevs": 4, 00:13:10.901 "num_base_bdevs_discovered": 0, 00:13:10.901 "num_base_bdevs_operational": 4, 00:13:10.901 "base_bdevs_list": [ 00:13:10.901 { 00:13:10.901 "name": "BaseBdev1", 00:13:10.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.901 "is_configured": false, 00:13:10.901 "data_offset": 0, 00:13:10.901 "data_size": 0 00:13:10.901 }, 00:13:10.901 { 00:13:10.901 "name": "BaseBdev2", 00:13:10.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.901 "is_configured": false, 00:13:10.901 "data_offset": 0, 00:13:10.901 "data_size": 0 00:13:10.901 }, 00:13:10.901 { 00:13:10.901 "name": "BaseBdev3", 00:13:10.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.901 "is_configured": false, 00:13:10.901 "data_offset": 0, 00:13:10.901 "data_size": 0 00:13:10.901 }, 00:13:10.901 { 00:13:10.901 "name": "BaseBdev4", 00:13:10.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.901 "is_configured": false, 00:13:10.901 "data_offset": 0, 00:13:10.901 "data_size": 0 00:13:10.901 } 00:13:10.901 ] 00:13:10.901 }' 00:13:10.901 20:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:10.901 20:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.469 20:09:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:11.469 20:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.469 20:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.469 [2024-10-17 20:09:56.983446] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:11.469 [2024-10-17 20:09:56.983492] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:11.469 20:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.469 20:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:11.469 20:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.469 20:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.469 [2024-10-17 20:09:56.991433] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:11.469 [2024-10-17 20:09:56.991502] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:11.469 [2024-10-17 20:09:56.991525] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:11.469 [2024-10-17 20:09:56.991541] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:11.469 [2024-10-17 20:09:56.991551] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:11.469 [2024-10-17 20:09:56.991565] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:11.469 [2024-10-17 20:09:56.991575] bdev.c:8281:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:13:11.469 [2024-10-17 20:09:56.991589] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:11.469 20:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.469 20:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:11.469 20:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.469 20:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.469 [2024-10-17 20:09:57.036306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:11.469 BaseBdev1 00:13:11.469 20:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.469 20:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:11.469 20:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:11.469 20:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:11.469 20:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:11.469 20:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:11.469 20:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:11.469 20:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:11.469 20:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.469 20:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.469 20:09:57 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.469 20:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:11.469 20:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.469 20:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.469 [ 00:13:11.469 { 00:13:11.469 "name": "BaseBdev1", 00:13:11.469 "aliases": [ 00:13:11.469 "bf666b97-f7a8-4b47-ac65-4fde1ee8a986" 00:13:11.469 ], 00:13:11.469 "product_name": "Malloc disk", 00:13:11.469 "block_size": 512, 00:13:11.469 "num_blocks": 65536, 00:13:11.469 "uuid": "bf666b97-f7a8-4b47-ac65-4fde1ee8a986", 00:13:11.469 "assigned_rate_limits": { 00:13:11.469 "rw_ios_per_sec": 0, 00:13:11.469 "rw_mbytes_per_sec": 0, 00:13:11.469 "r_mbytes_per_sec": 0, 00:13:11.469 "w_mbytes_per_sec": 0 00:13:11.469 }, 00:13:11.469 "claimed": true, 00:13:11.469 "claim_type": "exclusive_write", 00:13:11.469 "zoned": false, 00:13:11.469 "supported_io_types": { 00:13:11.469 "read": true, 00:13:11.469 "write": true, 00:13:11.469 "unmap": true, 00:13:11.469 "flush": true, 00:13:11.469 "reset": true, 00:13:11.469 "nvme_admin": false, 00:13:11.469 "nvme_io": false, 00:13:11.469 "nvme_io_md": false, 00:13:11.469 "write_zeroes": true, 00:13:11.469 "zcopy": true, 00:13:11.469 "get_zone_info": false, 00:13:11.469 "zone_management": false, 00:13:11.469 "zone_append": false, 00:13:11.469 "compare": false, 00:13:11.469 "compare_and_write": false, 00:13:11.469 "abort": true, 00:13:11.469 "seek_hole": false, 00:13:11.469 "seek_data": false, 00:13:11.469 "copy": true, 00:13:11.469 "nvme_iov_md": false 00:13:11.469 }, 00:13:11.469 "memory_domains": [ 00:13:11.469 { 00:13:11.469 "dma_device_id": "system", 00:13:11.469 "dma_device_type": 1 00:13:11.469 }, 00:13:11.469 { 00:13:11.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:11.469 "dma_device_type": 2 00:13:11.469 } 
00:13:11.469 ], 00:13:11.469 "driver_specific": {} 00:13:11.469 } 00:13:11.469 ] 00:13:11.469 20:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.469 20:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:11.469 20:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:11.469 20:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:11.469 20:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:11.469 20:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:11.469 20:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:11.469 20:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:11.469 20:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.469 20:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.469 20:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.469 20:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.469 20:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.469 20:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.469 20:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.469 20:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:11.469 20:09:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.728 20:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.728 "name": "Existed_Raid", 00:13:11.728 "uuid": "7875de9b-3865-420b-8e70-3b8d11f03b92", 00:13:11.728 "strip_size_kb": 64, 00:13:11.728 "state": "configuring", 00:13:11.728 "raid_level": "concat", 00:13:11.728 "superblock": true, 00:13:11.728 "num_base_bdevs": 4, 00:13:11.728 "num_base_bdevs_discovered": 1, 00:13:11.728 "num_base_bdevs_operational": 4, 00:13:11.728 "base_bdevs_list": [ 00:13:11.728 { 00:13:11.728 "name": "BaseBdev1", 00:13:11.728 "uuid": "bf666b97-f7a8-4b47-ac65-4fde1ee8a986", 00:13:11.728 "is_configured": true, 00:13:11.728 "data_offset": 2048, 00:13:11.728 "data_size": 63488 00:13:11.728 }, 00:13:11.728 { 00:13:11.728 "name": "BaseBdev2", 00:13:11.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.728 "is_configured": false, 00:13:11.728 "data_offset": 0, 00:13:11.728 "data_size": 0 00:13:11.728 }, 00:13:11.728 { 00:13:11.728 "name": "BaseBdev3", 00:13:11.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.728 "is_configured": false, 00:13:11.728 "data_offset": 0, 00:13:11.728 "data_size": 0 00:13:11.728 }, 00:13:11.728 { 00:13:11.728 "name": "BaseBdev4", 00:13:11.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.728 "is_configured": false, 00:13:11.728 "data_offset": 0, 00:13:11.728 "data_size": 0 00:13:11.728 } 00:13:11.728 ] 00:13:11.728 }' 00:13:11.728 20:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.728 20:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.986 20:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:11.986 20:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.986 20:09:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.986 [2024-10-17 20:09:57.600524] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:11.986 [2024-10-17 20:09:57.600596] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:11.986 20:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.986 20:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:11.986 20:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.986 20:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.986 [2024-10-17 20:09:57.608564] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:11.986 [2024-10-17 20:09:57.611121] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:11.986 [2024-10-17 20:09:57.611293] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:11.986 [2024-10-17 20:09:57.611412] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:11.986 [2024-10-17 20:09:57.611475] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:11.986 [2024-10-17 20:09:57.611583] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:11.986 [2024-10-17 20:09:57.611642] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:11.986 20:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.986 20:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:13:11.986 20:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:11.986 20:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:11.986 20:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:11.986 20:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:11.986 20:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:11.986 20:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:11.986 20:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:11.986 20:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.986 20:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.986 20:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.986 20:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.986 20:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.986 20:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.986 20:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:11.986 20:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.244 20:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.244 20:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:13:12.244 "name": "Existed_Raid", 00:13:12.244 "uuid": "ef9b114b-4ad7-461b-82b7-44e9c77d22ea", 00:13:12.244 "strip_size_kb": 64, 00:13:12.244 "state": "configuring", 00:13:12.244 "raid_level": "concat", 00:13:12.244 "superblock": true, 00:13:12.244 "num_base_bdevs": 4, 00:13:12.244 "num_base_bdevs_discovered": 1, 00:13:12.244 "num_base_bdevs_operational": 4, 00:13:12.244 "base_bdevs_list": [ 00:13:12.244 { 00:13:12.244 "name": "BaseBdev1", 00:13:12.244 "uuid": "bf666b97-f7a8-4b47-ac65-4fde1ee8a986", 00:13:12.244 "is_configured": true, 00:13:12.244 "data_offset": 2048, 00:13:12.244 "data_size": 63488 00:13:12.244 }, 00:13:12.244 { 00:13:12.244 "name": "BaseBdev2", 00:13:12.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.244 "is_configured": false, 00:13:12.244 "data_offset": 0, 00:13:12.244 "data_size": 0 00:13:12.244 }, 00:13:12.244 { 00:13:12.244 "name": "BaseBdev3", 00:13:12.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.244 "is_configured": false, 00:13:12.244 "data_offset": 0, 00:13:12.244 "data_size": 0 00:13:12.244 }, 00:13:12.244 { 00:13:12.244 "name": "BaseBdev4", 00:13:12.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.244 "is_configured": false, 00:13:12.244 "data_offset": 0, 00:13:12.244 "data_size": 0 00:13:12.244 } 00:13:12.244 ] 00:13:12.244 }' 00:13:12.244 20:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.244 20:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.502 20:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:12.502 20:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.502 20:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.761 [2024-10-17 20:09:58.191932] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:13:12.761 BaseBdev2 00:13:12.761 20:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.761 20:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:12.761 20:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:12.761 20:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:12.761 20:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:12.761 20:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:12.761 20:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:12.761 20:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:12.761 20:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.761 20:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.761 20:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.761 20:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:12.761 20:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.761 20:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.761 [ 00:13:12.761 { 00:13:12.761 "name": "BaseBdev2", 00:13:12.761 "aliases": [ 00:13:12.761 "7c93c816-1f60-445b-b43c-c1bff9d42d8d" 00:13:12.761 ], 00:13:12.761 "product_name": "Malloc disk", 00:13:12.761 "block_size": 512, 00:13:12.761 "num_blocks": 65536, 00:13:12.761 "uuid": "7c93c816-1f60-445b-b43c-c1bff9d42d8d", 
00:13:12.761 "assigned_rate_limits": { 00:13:12.761 "rw_ios_per_sec": 0, 00:13:12.761 "rw_mbytes_per_sec": 0, 00:13:12.761 "r_mbytes_per_sec": 0, 00:13:12.761 "w_mbytes_per_sec": 0 00:13:12.761 }, 00:13:12.761 "claimed": true, 00:13:12.761 "claim_type": "exclusive_write", 00:13:12.761 "zoned": false, 00:13:12.761 "supported_io_types": { 00:13:12.761 "read": true, 00:13:12.761 "write": true, 00:13:12.761 "unmap": true, 00:13:12.761 "flush": true, 00:13:12.761 "reset": true, 00:13:12.761 "nvme_admin": false, 00:13:12.761 "nvme_io": false, 00:13:12.761 "nvme_io_md": false, 00:13:12.761 "write_zeroes": true, 00:13:12.761 "zcopy": true, 00:13:12.761 "get_zone_info": false, 00:13:12.761 "zone_management": false, 00:13:12.761 "zone_append": false, 00:13:12.761 "compare": false, 00:13:12.761 "compare_and_write": false, 00:13:12.761 "abort": true, 00:13:12.761 "seek_hole": false, 00:13:12.761 "seek_data": false, 00:13:12.762 "copy": true, 00:13:12.762 "nvme_iov_md": false 00:13:12.762 }, 00:13:12.762 "memory_domains": [ 00:13:12.762 { 00:13:12.762 "dma_device_id": "system", 00:13:12.762 "dma_device_type": 1 00:13:12.762 }, 00:13:12.762 { 00:13:12.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:12.762 "dma_device_type": 2 00:13:12.762 } 00:13:12.762 ], 00:13:12.762 "driver_specific": {} 00:13:12.762 } 00:13:12.762 ] 00:13:12.762 20:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.762 20:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:12.762 20:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:12.762 20:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:12.762 20:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:12.762 20:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:13:12.762 20:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:12.762 20:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:12.762 20:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:12.762 20:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:12.762 20:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.762 20:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.762 20:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.762 20:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.762 20:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.762 20:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.762 20:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.762 20:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:12.762 20:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.762 20:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.762 "name": "Existed_Raid", 00:13:12.762 "uuid": "ef9b114b-4ad7-461b-82b7-44e9c77d22ea", 00:13:12.762 "strip_size_kb": 64, 00:13:12.762 "state": "configuring", 00:13:12.762 "raid_level": "concat", 00:13:12.762 "superblock": true, 00:13:12.762 "num_base_bdevs": 4, 00:13:12.762 "num_base_bdevs_discovered": 2, 00:13:12.762 
"num_base_bdevs_operational": 4, 00:13:12.762 "base_bdevs_list": [ 00:13:12.762 { 00:13:12.762 "name": "BaseBdev1", 00:13:12.762 "uuid": "bf666b97-f7a8-4b47-ac65-4fde1ee8a986", 00:13:12.762 "is_configured": true, 00:13:12.762 "data_offset": 2048, 00:13:12.762 "data_size": 63488 00:13:12.762 }, 00:13:12.762 { 00:13:12.762 "name": "BaseBdev2", 00:13:12.762 "uuid": "7c93c816-1f60-445b-b43c-c1bff9d42d8d", 00:13:12.762 "is_configured": true, 00:13:12.762 "data_offset": 2048, 00:13:12.762 "data_size": 63488 00:13:12.762 }, 00:13:12.762 { 00:13:12.762 "name": "BaseBdev3", 00:13:12.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.762 "is_configured": false, 00:13:12.762 "data_offset": 0, 00:13:12.762 "data_size": 0 00:13:12.762 }, 00:13:12.762 { 00:13:12.762 "name": "BaseBdev4", 00:13:12.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.762 "is_configured": false, 00:13:12.762 "data_offset": 0, 00:13:12.762 "data_size": 0 00:13:12.762 } 00:13:12.762 ] 00:13:12.762 }' 00:13:12.762 20:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.762 20:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.329 20:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:13.329 20:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.329 20:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.329 [2024-10-17 20:09:58.834498] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:13.329 BaseBdev3 00:13:13.329 20:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.329 20:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:13.329 20:09:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:13.329 20:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:13.329 20:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:13.329 20:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:13.329 20:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:13.329 20:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:13.329 20:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.329 20:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.329 20:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.329 20:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:13.329 20:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.329 20:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.329 [ 00:13:13.329 { 00:13:13.329 "name": "BaseBdev3", 00:13:13.329 "aliases": [ 00:13:13.329 "ce589bb8-c781-44fd-8b2d-543f8c3ed75a" 00:13:13.329 ], 00:13:13.330 "product_name": "Malloc disk", 00:13:13.330 "block_size": 512, 00:13:13.330 "num_blocks": 65536, 00:13:13.330 "uuid": "ce589bb8-c781-44fd-8b2d-543f8c3ed75a", 00:13:13.330 "assigned_rate_limits": { 00:13:13.330 "rw_ios_per_sec": 0, 00:13:13.330 "rw_mbytes_per_sec": 0, 00:13:13.330 "r_mbytes_per_sec": 0, 00:13:13.330 "w_mbytes_per_sec": 0 00:13:13.330 }, 00:13:13.330 "claimed": true, 00:13:13.330 "claim_type": "exclusive_write", 00:13:13.330 "zoned": false, 00:13:13.330 "supported_io_types": { 
00:13:13.330 "read": true, 00:13:13.330 "write": true, 00:13:13.330 "unmap": true, 00:13:13.330 "flush": true, 00:13:13.330 "reset": true, 00:13:13.330 "nvme_admin": false, 00:13:13.330 "nvme_io": false, 00:13:13.330 "nvme_io_md": false, 00:13:13.330 "write_zeroes": true, 00:13:13.330 "zcopy": true, 00:13:13.330 "get_zone_info": false, 00:13:13.330 "zone_management": false, 00:13:13.330 "zone_append": false, 00:13:13.330 "compare": false, 00:13:13.330 "compare_and_write": false, 00:13:13.330 "abort": true, 00:13:13.330 "seek_hole": false, 00:13:13.330 "seek_data": false, 00:13:13.330 "copy": true, 00:13:13.330 "nvme_iov_md": false 00:13:13.330 }, 00:13:13.330 "memory_domains": [ 00:13:13.330 { 00:13:13.330 "dma_device_id": "system", 00:13:13.330 "dma_device_type": 1 00:13:13.330 }, 00:13:13.330 { 00:13:13.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:13.330 "dma_device_type": 2 00:13:13.330 } 00:13:13.330 ], 00:13:13.330 "driver_specific": {} 00:13:13.330 } 00:13:13.330 ] 00:13:13.330 20:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.330 20:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:13.330 20:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:13.330 20:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:13.330 20:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:13.330 20:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:13.330 20:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:13.330 20:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:13.330 20:09:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:13.330 20:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:13.330 20:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.330 20:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:13.330 20:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.330 20:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.330 20:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:13.330 20:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.330 20:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.330 20:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.330 20:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.330 20:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.330 "name": "Existed_Raid", 00:13:13.330 "uuid": "ef9b114b-4ad7-461b-82b7-44e9c77d22ea", 00:13:13.330 "strip_size_kb": 64, 00:13:13.330 "state": "configuring", 00:13:13.330 "raid_level": "concat", 00:13:13.330 "superblock": true, 00:13:13.330 "num_base_bdevs": 4, 00:13:13.330 "num_base_bdevs_discovered": 3, 00:13:13.330 "num_base_bdevs_operational": 4, 00:13:13.330 "base_bdevs_list": [ 00:13:13.330 { 00:13:13.330 "name": "BaseBdev1", 00:13:13.330 "uuid": "bf666b97-f7a8-4b47-ac65-4fde1ee8a986", 00:13:13.330 "is_configured": true, 00:13:13.330 "data_offset": 2048, 00:13:13.330 "data_size": 63488 00:13:13.330 }, 00:13:13.330 { 00:13:13.330 "name": "BaseBdev2", 00:13:13.330 
"uuid": "7c93c816-1f60-445b-b43c-c1bff9d42d8d", 00:13:13.330 "is_configured": true, 00:13:13.330 "data_offset": 2048, 00:13:13.330 "data_size": 63488 00:13:13.330 }, 00:13:13.330 { 00:13:13.330 "name": "BaseBdev3", 00:13:13.330 "uuid": "ce589bb8-c781-44fd-8b2d-543f8c3ed75a", 00:13:13.330 "is_configured": true, 00:13:13.330 "data_offset": 2048, 00:13:13.330 "data_size": 63488 00:13:13.330 }, 00:13:13.330 { 00:13:13.330 "name": "BaseBdev4", 00:13:13.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.330 "is_configured": false, 00:13:13.330 "data_offset": 0, 00:13:13.330 "data_size": 0 00:13:13.330 } 00:13:13.330 ] 00:13:13.330 }' 00:13:13.330 20:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.330 20:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.901 20:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:13.901 20:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.901 20:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.901 [2024-10-17 20:09:59.437897] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:13.901 [2024-10-17 20:09:59.438395] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:13.901 [2024-10-17 20:09:59.438421] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:13.901 BaseBdev4 00:13:13.901 [2024-10-17 20:09:59.438798] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:13.901 [2024-10-17 20:09:59.438993] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:13.901 [2024-10-17 20:09:59.439023] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:13:13.901 [2024-10-17 20:09:59.439216] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:13.901 20:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.901 20:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:13.901 20:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:13:13.901 20:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:13.901 20:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:13.901 20:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:13.901 20:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:13.901 20:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:13.901 20:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.901 20:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.901 20:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.901 20:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:13.901 20:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.901 20:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.901 [ 00:13:13.901 { 00:13:13.901 "name": "BaseBdev4", 00:13:13.901 "aliases": [ 00:13:13.901 "464a3ea0-cb42-4015-b63b-2163afc8a0d0" 00:13:13.901 ], 00:13:13.901 "product_name": "Malloc disk", 00:13:13.901 "block_size": 512, 00:13:13.901 
"num_blocks": 65536, 00:13:13.901 "uuid": "464a3ea0-cb42-4015-b63b-2163afc8a0d0", 00:13:13.901 "assigned_rate_limits": { 00:13:13.901 "rw_ios_per_sec": 0, 00:13:13.901 "rw_mbytes_per_sec": 0, 00:13:13.901 "r_mbytes_per_sec": 0, 00:13:13.901 "w_mbytes_per_sec": 0 00:13:13.901 }, 00:13:13.901 "claimed": true, 00:13:13.901 "claim_type": "exclusive_write", 00:13:13.901 "zoned": false, 00:13:13.901 "supported_io_types": { 00:13:13.901 "read": true, 00:13:13.901 "write": true, 00:13:13.901 "unmap": true, 00:13:13.901 "flush": true, 00:13:13.901 "reset": true, 00:13:13.901 "nvme_admin": false, 00:13:13.901 "nvme_io": false, 00:13:13.901 "nvme_io_md": false, 00:13:13.901 "write_zeroes": true, 00:13:13.901 "zcopy": true, 00:13:13.901 "get_zone_info": false, 00:13:13.901 "zone_management": false, 00:13:13.901 "zone_append": false, 00:13:13.901 "compare": false, 00:13:13.901 "compare_and_write": false, 00:13:13.901 "abort": true, 00:13:13.901 "seek_hole": false, 00:13:13.901 "seek_data": false, 00:13:13.901 "copy": true, 00:13:13.901 "nvme_iov_md": false 00:13:13.901 }, 00:13:13.901 "memory_domains": [ 00:13:13.901 { 00:13:13.901 "dma_device_id": "system", 00:13:13.901 "dma_device_type": 1 00:13:13.901 }, 00:13:13.901 { 00:13:13.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:13.901 "dma_device_type": 2 00:13:13.901 } 00:13:13.901 ], 00:13:13.901 "driver_specific": {} 00:13:13.901 } 00:13:13.901 ] 00:13:13.901 20:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.901 20:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:13.901 20:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:13.901 20:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:13.901 20:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:13:13.901 20:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:13.901 20:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:13.901 20:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:13.901 20:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:13.901 20:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:13.901 20:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.901 20:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:13.901 20:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.901 20:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.901 20:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.901 20:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:13.901 20:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.901 20:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.901 20:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.901 20:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.901 "name": "Existed_Raid", 00:13:13.901 "uuid": "ef9b114b-4ad7-461b-82b7-44e9c77d22ea", 00:13:13.901 "strip_size_kb": 64, 00:13:13.901 "state": "online", 00:13:13.901 "raid_level": "concat", 00:13:13.901 "superblock": true, 00:13:13.901 "num_base_bdevs": 4, 
00:13:13.901 "num_base_bdevs_discovered": 4, 00:13:13.901 "num_base_bdevs_operational": 4, 00:13:13.901 "base_bdevs_list": [ 00:13:13.901 { 00:13:13.901 "name": "BaseBdev1", 00:13:13.901 "uuid": "bf666b97-f7a8-4b47-ac65-4fde1ee8a986", 00:13:13.901 "is_configured": true, 00:13:13.901 "data_offset": 2048, 00:13:13.901 "data_size": 63488 00:13:13.901 }, 00:13:13.901 { 00:13:13.901 "name": "BaseBdev2", 00:13:13.901 "uuid": "7c93c816-1f60-445b-b43c-c1bff9d42d8d", 00:13:13.901 "is_configured": true, 00:13:13.901 "data_offset": 2048, 00:13:13.902 "data_size": 63488 00:13:13.902 }, 00:13:13.902 { 00:13:13.902 "name": "BaseBdev3", 00:13:13.902 "uuid": "ce589bb8-c781-44fd-8b2d-543f8c3ed75a", 00:13:13.902 "is_configured": true, 00:13:13.902 "data_offset": 2048, 00:13:13.902 "data_size": 63488 00:13:13.902 }, 00:13:13.902 { 00:13:13.902 "name": "BaseBdev4", 00:13:13.902 "uuid": "464a3ea0-cb42-4015-b63b-2163afc8a0d0", 00:13:13.902 "is_configured": true, 00:13:13.902 "data_offset": 2048, 00:13:13.902 "data_size": 63488 00:13:13.902 } 00:13:13.902 ] 00:13:13.902 }' 00:13:13.902 20:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.902 20:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.468 20:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:14.468 20:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:14.468 20:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:14.468 20:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:14.468 20:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:14.468 20:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:14.468 
20:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:14.468 20:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:14.468 20:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.468 20:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.468 [2024-10-17 20:09:59.974598] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:14.468 20:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.468 20:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:14.468 "name": "Existed_Raid", 00:13:14.468 "aliases": [ 00:13:14.468 "ef9b114b-4ad7-461b-82b7-44e9c77d22ea" 00:13:14.468 ], 00:13:14.468 "product_name": "Raid Volume", 00:13:14.468 "block_size": 512, 00:13:14.468 "num_blocks": 253952, 00:13:14.468 "uuid": "ef9b114b-4ad7-461b-82b7-44e9c77d22ea", 00:13:14.468 "assigned_rate_limits": { 00:13:14.468 "rw_ios_per_sec": 0, 00:13:14.468 "rw_mbytes_per_sec": 0, 00:13:14.468 "r_mbytes_per_sec": 0, 00:13:14.468 "w_mbytes_per_sec": 0 00:13:14.468 }, 00:13:14.468 "claimed": false, 00:13:14.468 "zoned": false, 00:13:14.468 "supported_io_types": { 00:13:14.468 "read": true, 00:13:14.468 "write": true, 00:13:14.468 "unmap": true, 00:13:14.468 "flush": true, 00:13:14.468 "reset": true, 00:13:14.468 "nvme_admin": false, 00:13:14.468 "nvme_io": false, 00:13:14.468 "nvme_io_md": false, 00:13:14.468 "write_zeroes": true, 00:13:14.468 "zcopy": false, 00:13:14.468 "get_zone_info": false, 00:13:14.468 "zone_management": false, 00:13:14.468 "zone_append": false, 00:13:14.468 "compare": false, 00:13:14.468 "compare_and_write": false, 00:13:14.468 "abort": false, 00:13:14.468 "seek_hole": false, 00:13:14.468 "seek_data": false, 00:13:14.468 "copy": false, 00:13:14.468 
"nvme_iov_md": false 00:13:14.468 }, 00:13:14.468 "memory_domains": [ 00:13:14.468 { 00:13:14.468 "dma_device_id": "system", 00:13:14.468 "dma_device_type": 1 00:13:14.468 }, 00:13:14.468 { 00:13:14.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.468 "dma_device_type": 2 00:13:14.468 }, 00:13:14.468 { 00:13:14.468 "dma_device_id": "system", 00:13:14.468 "dma_device_type": 1 00:13:14.468 }, 00:13:14.468 { 00:13:14.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.468 "dma_device_type": 2 00:13:14.468 }, 00:13:14.468 { 00:13:14.468 "dma_device_id": "system", 00:13:14.468 "dma_device_type": 1 00:13:14.468 }, 00:13:14.468 { 00:13:14.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.468 "dma_device_type": 2 00:13:14.468 }, 00:13:14.468 { 00:13:14.468 "dma_device_id": "system", 00:13:14.468 "dma_device_type": 1 00:13:14.468 }, 00:13:14.468 { 00:13:14.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.468 "dma_device_type": 2 00:13:14.468 } 00:13:14.468 ], 00:13:14.468 "driver_specific": { 00:13:14.468 "raid": { 00:13:14.468 "uuid": "ef9b114b-4ad7-461b-82b7-44e9c77d22ea", 00:13:14.468 "strip_size_kb": 64, 00:13:14.468 "state": "online", 00:13:14.468 "raid_level": "concat", 00:13:14.468 "superblock": true, 00:13:14.469 "num_base_bdevs": 4, 00:13:14.469 "num_base_bdevs_discovered": 4, 00:13:14.469 "num_base_bdevs_operational": 4, 00:13:14.469 "base_bdevs_list": [ 00:13:14.469 { 00:13:14.469 "name": "BaseBdev1", 00:13:14.469 "uuid": "bf666b97-f7a8-4b47-ac65-4fde1ee8a986", 00:13:14.469 "is_configured": true, 00:13:14.469 "data_offset": 2048, 00:13:14.469 "data_size": 63488 00:13:14.469 }, 00:13:14.469 { 00:13:14.469 "name": "BaseBdev2", 00:13:14.469 "uuid": "7c93c816-1f60-445b-b43c-c1bff9d42d8d", 00:13:14.469 "is_configured": true, 00:13:14.469 "data_offset": 2048, 00:13:14.469 "data_size": 63488 00:13:14.469 }, 00:13:14.469 { 00:13:14.469 "name": "BaseBdev3", 00:13:14.469 "uuid": "ce589bb8-c781-44fd-8b2d-543f8c3ed75a", 00:13:14.469 "is_configured": true, 
00:13:14.469 "data_offset": 2048, 00:13:14.469 "data_size": 63488 00:13:14.469 }, 00:13:14.469 { 00:13:14.469 "name": "BaseBdev4", 00:13:14.469 "uuid": "464a3ea0-cb42-4015-b63b-2163afc8a0d0", 00:13:14.469 "is_configured": true, 00:13:14.469 "data_offset": 2048, 00:13:14.469 "data_size": 63488 00:13:14.469 } 00:13:14.469 ] 00:13:14.469 } 00:13:14.469 } 00:13:14.469 }' 00:13:14.469 20:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:14.469 20:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:14.469 BaseBdev2 00:13:14.469 BaseBdev3 00:13:14.469 BaseBdev4' 00:13:14.469 20:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:14.727 20:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:14.727 20:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:14.727 20:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:14.727 20:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.728 20:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:14.728 20:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.728 20:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.728 20:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:14.728 20:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:14.728 20:10:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:14.728 20:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:14.728 20:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:14.728 20:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.728 20:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.728 20:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.728 20:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:14.728 20:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:14.728 20:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:14.728 20:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:14.728 20:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:14.728 20:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.728 20:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.728 20:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.728 20:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:14.728 20:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:14.728 20:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:13:14.728 20:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:14.728 20:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:14.728 20:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.728 20:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.728 20:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.728 20:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:14.728 20:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:14.728 20:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:14.728 20:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.728 20:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.728 [2024-10-17 20:10:00.338391] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:14.728 [2024-10-17 20:10:00.338461] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:14.728 [2024-10-17 20:10:00.338526] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:14.986 20:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.986 20:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:14.986 20:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:13:14.986 20:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:13:14.986 20:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:13:14.986 20:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:13:14.986 20:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:13:14.986 20:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:14.986 20:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:13:14.986 20:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:14.986 20:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:14.986 20:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:14.986 20:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.986 20:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.986 20:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.986 20:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.986 20:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.986 20:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.986 20:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:14.986 20:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.986 20:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:13:14.986 20:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.986 "name": "Existed_Raid", 00:13:14.986 "uuid": "ef9b114b-4ad7-461b-82b7-44e9c77d22ea", 00:13:14.986 "strip_size_kb": 64, 00:13:14.986 "state": "offline", 00:13:14.986 "raid_level": "concat", 00:13:14.986 "superblock": true, 00:13:14.986 "num_base_bdevs": 4, 00:13:14.986 "num_base_bdevs_discovered": 3, 00:13:14.986 "num_base_bdevs_operational": 3, 00:13:14.986 "base_bdevs_list": [ 00:13:14.986 { 00:13:14.986 "name": null, 00:13:14.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.986 "is_configured": false, 00:13:14.986 "data_offset": 0, 00:13:14.986 "data_size": 63488 00:13:14.986 }, 00:13:14.986 { 00:13:14.986 "name": "BaseBdev2", 00:13:14.987 "uuid": "7c93c816-1f60-445b-b43c-c1bff9d42d8d", 00:13:14.987 "is_configured": true, 00:13:14.987 "data_offset": 2048, 00:13:14.987 "data_size": 63488 00:13:14.987 }, 00:13:14.987 { 00:13:14.987 "name": "BaseBdev3", 00:13:14.987 "uuid": "ce589bb8-c781-44fd-8b2d-543f8c3ed75a", 00:13:14.987 "is_configured": true, 00:13:14.987 "data_offset": 2048, 00:13:14.987 "data_size": 63488 00:13:14.987 }, 00:13:14.987 { 00:13:14.987 "name": "BaseBdev4", 00:13:14.987 "uuid": "464a3ea0-cb42-4015-b63b-2163afc8a0d0", 00:13:14.987 "is_configured": true, 00:13:14.987 "data_offset": 2048, 00:13:14.987 "data_size": 63488 00:13:14.987 } 00:13:14.987 ] 00:13:14.987 }' 00:13:14.987 20:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.987 20:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.555 20:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:15.555 20:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:15.555 20:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.555 
20:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:15.555 20:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.555 20:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.555 20:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.555 20:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:15.555 20:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:15.555 20:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:15.555 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.555 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.555 [2024-10-17 20:10:01.006742] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:15.555 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.555 20:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:15.555 20:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:15.555 20:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.555 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.555 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.555 20:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:15.555 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:13:15.555 20:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:15.555 20:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:15.555 20:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:15.555 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.555 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.555 [2024-10-17 20:10:01.153970] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:15.813 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.813 20:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:15.813 20:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:15.813 20:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.813 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.813 20:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:15.813 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.813 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.813 20:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:15.813 20:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:15.813 20:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:15.813 20:10:01 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.813 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.813 [2024-10-17 20:10:01.298572] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:15.813 [2024-10-17 20:10:01.298633] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:15.813 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.813 20:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:15.813 20:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:15.813 20:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.813 20:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:15.813 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.813 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.814 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.814 20:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:15.814 20:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:15.814 20:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:15.814 20:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:15.814 20:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:15.814 20:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:13:15.814 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.814 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.072 BaseBdev2 00:13:16.072 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.072 20:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:16.072 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:16.072 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:16.072 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:16.072 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:16.072 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:16.072 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:16.072 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.072 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.072 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.072 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:16.072 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.072 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.072 [ 00:13:16.072 { 00:13:16.072 "name": "BaseBdev2", 00:13:16.072 "aliases": [ 00:13:16.072 
"719a0c53-9c83-4ef3-b318-ddc170689dbf" 00:13:16.072 ], 00:13:16.072 "product_name": "Malloc disk", 00:13:16.072 "block_size": 512, 00:13:16.072 "num_blocks": 65536, 00:13:16.072 "uuid": "719a0c53-9c83-4ef3-b318-ddc170689dbf", 00:13:16.072 "assigned_rate_limits": { 00:13:16.072 "rw_ios_per_sec": 0, 00:13:16.072 "rw_mbytes_per_sec": 0, 00:13:16.072 "r_mbytes_per_sec": 0, 00:13:16.072 "w_mbytes_per_sec": 0 00:13:16.072 }, 00:13:16.072 "claimed": false, 00:13:16.072 "zoned": false, 00:13:16.072 "supported_io_types": { 00:13:16.072 "read": true, 00:13:16.072 "write": true, 00:13:16.072 "unmap": true, 00:13:16.072 "flush": true, 00:13:16.072 "reset": true, 00:13:16.072 "nvme_admin": false, 00:13:16.072 "nvme_io": false, 00:13:16.072 "nvme_io_md": false, 00:13:16.072 "write_zeroes": true, 00:13:16.072 "zcopy": true, 00:13:16.072 "get_zone_info": false, 00:13:16.072 "zone_management": false, 00:13:16.072 "zone_append": false, 00:13:16.072 "compare": false, 00:13:16.072 "compare_and_write": false, 00:13:16.072 "abort": true, 00:13:16.072 "seek_hole": false, 00:13:16.072 "seek_data": false, 00:13:16.072 "copy": true, 00:13:16.072 "nvme_iov_md": false 00:13:16.072 }, 00:13:16.072 "memory_domains": [ 00:13:16.072 { 00:13:16.072 "dma_device_id": "system", 00:13:16.072 "dma_device_type": 1 00:13:16.072 }, 00:13:16.072 { 00:13:16.072 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:16.072 "dma_device_type": 2 00:13:16.072 } 00:13:16.072 ], 00:13:16.072 "driver_specific": {} 00:13:16.072 } 00:13:16.072 ] 00:13:16.072 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.072 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:16.073 20:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:16.073 20:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:16.073 20:10:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:16.073 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.073 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.073 BaseBdev3 00:13:16.073 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.073 20:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:16.073 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:16.073 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:16.073 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:16.073 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:16.073 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:16.073 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:16.073 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.073 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.073 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.073 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:16.073 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.073 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.073 [ 00:13:16.073 { 
00:13:16.073 "name": "BaseBdev3", 00:13:16.073 "aliases": [ 00:13:16.073 "943dafba-2cd2-4dd9-9237-f8a97ab1d183" 00:13:16.073 ], 00:13:16.073 "product_name": "Malloc disk", 00:13:16.073 "block_size": 512, 00:13:16.073 "num_blocks": 65536, 00:13:16.073 "uuid": "943dafba-2cd2-4dd9-9237-f8a97ab1d183", 00:13:16.073 "assigned_rate_limits": { 00:13:16.073 "rw_ios_per_sec": 0, 00:13:16.073 "rw_mbytes_per_sec": 0, 00:13:16.073 "r_mbytes_per_sec": 0, 00:13:16.073 "w_mbytes_per_sec": 0 00:13:16.073 }, 00:13:16.073 "claimed": false, 00:13:16.073 "zoned": false, 00:13:16.073 "supported_io_types": { 00:13:16.073 "read": true, 00:13:16.073 "write": true, 00:13:16.073 "unmap": true, 00:13:16.073 "flush": true, 00:13:16.073 "reset": true, 00:13:16.073 "nvme_admin": false, 00:13:16.073 "nvme_io": false, 00:13:16.073 "nvme_io_md": false, 00:13:16.073 "write_zeroes": true, 00:13:16.073 "zcopy": true, 00:13:16.073 "get_zone_info": false, 00:13:16.073 "zone_management": false, 00:13:16.073 "zone_append": false, 00:13:16.073 "compare": false, 00:13:16.073 "compare_and_write": false, 00:13:16.073 "abort": true, 00:13:16.073 "seek_hole": false, 00:13:16.073 "seek_data": false, 00:13:16.073 "copy": true, 00:13:16.073 "nvme_iov_md": false 00:13:16.073 }, 00:13:16.073 "memory_domains": [ 00:13:16.073 { 00:13:16.073 "dma_device_id": "system", 00:13:16.073 "dma_device_type": 1 00:13:16.073 }, 00:13:16.073 { 00:13:16.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:16.073 "dma_device_type": 2 00:13:16.073 } 00:13:16.073 ], 00:13:16.073 "driver_specific": {} 00:13:16.073 } 00:13:16.073 ] 00:13:16.073 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.073 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:16.073 20:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:16.073 20:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:13:16.073 20:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:16.073 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.073 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.073 BaseBdev4 00:13:16.073 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.073 20:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:16.073 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:13:16.073 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:16.073 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:16.073 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:16.073 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:16.073 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:16.073 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.073 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.073 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.073 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:16.073 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.073 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:13:16.073 [ 00:13:16.073 { 00:13:16.073 "name": "BaseBdev4", 00:13:16.073 "aliases": [ 00:13:16.073 "c1553712-1819-48a4-87c9-94d7e8465594" 00:13:16.073 ], 00:13:16.073 "product_name": "Malloc disk", 00:13:16.073 "block_size": 512, 00:13:16.073 "num_blocks": 65536, 00:13:16.073 "uuid": "c1553712-1819-48a4-87c9-94d7e8465594", 00:13:16.073 "assigned_rate_limits": { 00:13:16.073 "rw_ios_per_sec": 0, 00:13:16.073 "rw_mbytes_per_sec": 0, 00:13:16.073 "r_mbytes_per_sec": 0, 00:13:16.073 "w_mbytes_per_sec": 0 00:13:16.073 }, 00:13:16.073 "claimed": false, 00:13:16.073 "zoned": false, 00:13:16.073 "supported_io_types": { 00:13:16.073 "read": true, 00:13:16.073 "write": true, 00:13:16.073 "unmap": true, 00:13:16.073 "flush": true, 00:13:16.073 "reset": true, 00:13:16.073 "nvme_admin": false, 00:13:16.073 "nvme_io": false, 00:13:16.073 "nvme_io_md": false, 00:13:16.073 "write_zeroes": true, 00:13:16.073 "zcopy": true, 00:13:16.073 "get_zone_info": false, 00:13:16.073 "zone_management": false, 00:13:16.073 "zone_append": false, 00:13:16.073 "compare": false, 00:13:16.073 "compare_and_write": false, 00:13:16.073 "abort": true, 00:13:16.073 "seek_hole": false, 00:13:16.073 "seek_data": false, 00:13:16.073 "copy": true, 00:13:16.073 "nvme_iov_md": false 00:13:16.073 }, 00:13:16.073 "memory_domains": [ 00:13:16.073 { 00:13:16.073 "dma_device_id": "system", 00:13:16.073 "dma_device_type": 1 00:13:16.073 }, 00:13:16.073 { 00:13:16.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:16.073 "dma_device_type": 2 00:13:16.073 } 00:13:16.073 ], 00:13:16.073 "driver_specific": {} 00:13:16.073 } 00:13:16.073 ] 00:13:16.073 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.073 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:16.073 20:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:16.073 20:10:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:16.073 20:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:16.074 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.074 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.074 [2024-10-17 20:10:01.668178] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:16.074 [2024-10-17 20:10:01.668249] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:16.074 [2024-10-17 20:10:01.668287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:16.074 [2024-10-17 20:10:01.670774] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:16.074 [2024-10-17 20:10:01.671033] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:16.074 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.074 20:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:16.074 20:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:16.074 20:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:16.074 20:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:16.074 20:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:16.074 20:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:13:16.074 20:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.074 20:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.074 20:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.074 20:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.074 20:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.074 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.074 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.074 20:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:16.074 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.332 20:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.332 "name": "Existed_Raid", 00:13:16.332 "uuid": "b7cd228d-861a-4c15-bb0c-d1e7513994fb", 00:13:16.332 "strip_size_kb": 64, 00:13:16.332 "state": "configuring", 00:13:16.332 "raid_level": "concat", 00:13:16.332 "superblock": true, 00:13:16.332 "num_base_bdevs": 4, 00:13:16.332 "num_base_bdevs_discovered": 3, 00:13:16.332 "num_base_bdevs_operational": 4, 00:13:16.332 "base_bdevs_list": [ 00:13:16.332 { 00:13:16.332 "name": "BaseBdev1", 00:13:16.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.332 "is_configured": false, 00:13:16.332 "data_offset": 0, 00:13:16.332 "data_size": 0 00:13:16.332 }, 00:13:16.332 { 00:13:16.332 "name": "BaseBdev2", 00:13:16.332 "uuid": "719a0c53-9c83-4ef3-b318-ddc170689dbf", 00:13:16.332 "is_configured": true, 00:13:16.332 "data_offset": 2048, 00:13:16.332 "data_size": 63488 
00:13:16.332 }, 00:13:16.332 { 00:13:16.332 "name": "BaseBdev3", 00:13:16.332 "uuid": "943dafba-2cd2-4dd9-9237-f8a97ab1d183", 00:13:16.332 "is_configured": true, 00:13:16.332 "data_offset": 2048, 00:13:16.332 "data_size": 63488 00:13:16.332 }, 00:13:16.332 { 00:13:16.332 "name": "BaseBdev4", 00:13:16.332 "uuid": "c1553712-1819-48a4-87c9-94d7e8465594", 00:13:16.332 "is_configured": true, 00:13:16.332 "data_offset": 2048, 00:13:16.332 "data_size": 63488 00:13:16.332 } 00:13:16.332 ] 00:13:16.332 }' 00:13:16.332 20:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.332 20:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.611 20:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:16.611 20:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.611 20:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.611 [2024-10-17 20:10:02.212324] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:16.611 20:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.611 20:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:16.611 20:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:16.611 20:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:16.611 20:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:16.611 20:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:16.611 20:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:13:16.611 20:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.611 20:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.611 20:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.611 20:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.611 20:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.611 20:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:16.611 20:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.611 20:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.611 20:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.869 20:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.869 "name": "Existed_Raid", 00:13:16.869 "uuid": "b7cd228d-861a-4c15-bb0c-d1e7513994fb", 00:13:16.869 "strip_size_kb": 64, 00:13:16.869 "state": "configuring", 00:13:16.869 "raid_level": "concat", 00:13:16.869 "superblock": true, 00:13:16.869 "num_base_bdevs": 4, 00:13:16.869 "num_base_bdevs_discovered": 2, 00:13:16.869 "num_base_bdevs_operational": 4, 00:13:16.869 "base_bdevs_list": [ 00:13:16.869 { 00:13:16.869 "name": "BaseBdev1", 00:13:16.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.869 "is_configured": false, 00:13:16.869 "data_offset": 0, 00:13:16.869 "data_size": 0 00:13:16.869 }, 00:13:16.869 { 00:13:16.869 "name": null, 00:13:16.869 "uuid": "719a0c53-9c83-4ef3-b318-ddc170689dbf", 00:13:16.869 "is_configured": false, 00:13:16.869 "data_offset": 0, 00:13:16.869 "data_size": 63488 
00:13:16.869 }, 00:13:16.869 { 00:13:16.869 "name": "BaseBdev3", 00:13:16.869 "uuid": "943dafba-2cd2-4dd9-9237-f8a97ab1d183", 00:13:16.869 "is_configured": true, 00:13:16.869 "data_offset": 2048, 00:13:16.869 "data_size": 63488 00:13:16.869 }, 00:13:16.869 { 00:13:16.869 "name": "BaseBdev4", 00:13:16.869 "uuid": "c1553712-1819-48a4-87c9-94d7e8465594", 00:13:16.869 "is_configured": true, 00:13:16.869 "data_offset": 2048, 00:13:16.869 "data_size": 63488 00:13:16.869 } 00:13:16.869 ] 00:13:16.869 }' 00:13:16.869 20:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.869 20:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.126 20:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.126 20:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.126 20:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:17.126 20:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.126 20:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.385 20:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:17.385 20:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:17.385 20:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.385 20:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.385 [2024-10-17 20:10:02.838505] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:17.385 BaseBdev1 00:13:17.385 20:10:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.385 20:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:17.385 20:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:17.385 20:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:17.385 20:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:17.385 20:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:17.385 20:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:17.385 20:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:17.385 20:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.385 20:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.385 20:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.385 20:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:17.385 20:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.385 20:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.385 [ 00:13:17.385 { 00:13:17.385 "name": "BaseBdev1", 00:13:17.385 "aliases": [ 00:13:17.385 "0ec7e07e-f7b6-43a4-aa82-ba45eb13768a" 00:13:17.385 ], 00:13:17.385 "product_name": "Malloc disk", 00:13:17.385 "block_size": 512, 00:13:17.385 "num_blocks": 65536, 00:13:17.385 "uuid": "0ec7e07e-f7b6-43a4-aa82-ba45eb13768a", 00:13:17.385 "assigned_rate_limits": { 00:13:17.385 "rw_ios_per_sec": 0, 00:13:17.385 "rw_mbytes_per_sec": 0, 
00:13:17.385 "r_mbytes_per_sec": 0, 00:13:17.385 "w_mbytes_per_sec": 0 00:13:17.385 }, 00:13:17.385 "claimed": true, 00:13:17.385 "claim_type": "exclusive_write", 00:13:17.385 "zoned": false, 00:13:17.385 "supported_io_types": { 00:13:17.385 "read": true, 00:13:17.385 "write": true, 00:13:17.385 "unmap": true, 00:13:17.385 "flush": true, 00:13:17.385 "reset": true, 00:13:17.385 "nvme_admin": false, 00:13:17.385 "nvme_io": false, 00:13:17.385 "nvme_io_md": false, 00:13:17.385 "write_zeroes": true, 00:13:17.385 "zcopy": true, 00:13:17.385 "get_zone_info": false, 00:13:17.385 "zone_management": false, 00:13:17.385 "zone_append": false, 00:13:17.385 "compare": false, 00:13:17.385 "compare_and_write": false, 00:13:17.385 "abort": true, 00:13:17.385 "seek_hole": false, 00:13:17.385 "seek_data": false, 00:13:17.385 "copy": true, 00:13:17.385 "nvme_iov_md": false 00:13:17.385 }, 00:13:17.385 "memory_domains": [ 00:13:17.385 { 00:13:17.385 "dma_device_id": "system", 00:13:17.385 "dma_device_type": 1 00:13:17.385 }, 00:13:17.385 { 00:13:17.385 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:17.385 "dma_device_type": 2 00:13:17.385 } 00:13:17.385 ], 00:13:17.385 "driver_specific": {} 00:13:17.385 } 00:13:17.385 ] 00:13:17.385 20:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.385 20:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:17.385 20:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:17.385 20:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:17.385 20:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:17.385 20:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:17.385 20:10:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:17.385 20:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:17.385 20:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.385 20:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.385 20:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.385 20:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.385 20:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.385 20:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:17.385 20:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.385 20:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.385 20:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.385 20:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.385 "name": "Existed_Raid", 00:13:17.385 "uuid": "b7cd228d-861a-4c15-bb0c-d1e7513994fb", 00:13:17.385 "strip_size_kb": 64, 00:13:17.385 "state": "configuring", 00:13:17.385 "raid_level": "concat", 00:13:17.385 "superblock": true, 00:13:17.385 "num_base_bdevs": 4, 00:13:17.385 "num_base_bdevs_discovered": 3, 00:13:17.385 "num_base_bdevs_operational": 4, 00:13:17.385 "base_bdevs_list": [ 00:13:17.385 { 00:13:17.385 "name": "BaseBdev1", 00:13:17.385 "uuid": "0ec7e07e-f7b6-43a4-aa82-ba45eb13768a", 00:13:17.385 "is_configured": true, 00:13:17.385 "data_offset": 2048, 00:13:17.385 "data_size": 63488 00:13:17.385 }, 00:13:17.385 { 
00:13:17.385 "name": null, 00:13:17.385 "uuid": "719a0c53-9c83-4ef3-b318-ddc170689dbf", 00:13:17.385 "is_configured": false, 00:13:17.385 "data_offset": 0, 00:13:17.385 "data_size": 63488 00:13:17.385 }, 00:13:17.385 { 00:13:17.385 "name": "BaseBdev3", 00:13:17.385 "uuid": "943dafba-2cd2-4dd9-9237-f8a97ab1d183", 00:13:17.385 "is_configured": true, 00:13:17.385 "data_offset": 2048, 00:13:17.385 "data_size": 63488 00:13:17.385 }, 00:13:17.385 { 00:13:17.385 "name": "BaseBdev4", 00:13:17.385 "uuid": "c1553712-1819-48a4-87c9-94d7e8465594", 00:13:17.385 "is_configured": true, 00:13:17.385 "data_offset": 2048, 00:13:17.385 "data_size": 63488 00:13:17.385 } 00:13:17.385 ] 00:13:17.385 }' 00:13:17.385 20:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.385 20:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.952 20:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.952 20:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:17.952 20:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.952 20:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.952 20:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.952 20:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:17.952 20:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:17.952 20:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.952 20:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.952 [2024-10-17 20:10:03.470733] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:17.952 20:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.952 20:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:17.952 20:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:17.952 20:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:17.952 20:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:17.952 20:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:17.952 20:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:17.952 20:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.952 20:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.952 20:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.952 20:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.952 20:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.952 20:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.952 20:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:17.953 20:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.953 20:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.953 20:10:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.953 "name": "Existed_Raid", 00:13:17.953 "uuid": "b7cd228d-861a-4c15-bb0c-d1e7513994fb", 00:13:17.953 "strip_size_kb": 64, 00:13:17.953 "state": "configuring", 00:13:17.953 "raid_level": "concat", 00:13:17.953 "superblock": true, 00:13:17.953 "num_base_bdevs": 4, 00:13:17.953 "num_base_bdevs_discovered": 2, 00:13:17.953 "num_base_bdevs_operational": 4, 00:13:17.953 "base_bdevs_list": [ 00:13:17.953 { 00:13:17.953 "name": "BaseBdev1", 00:13:17.953 "uuid": "0ec7e07e-f7b6-43a4-aa82-ba45eb13768a", 00:13:17.953 "is_configured": true, 00:13:17.953 "data_offset": 2048, 00:13:17.953 "data_size": 63488 00:13:17.953 }, 00:13:17.953 { 00:13:17.953 "name": null, 00:13:17.953 "uuid": "719a0c53-9c83-4ef3-b318-ddc170689dbf", 00:13:17.953 "is_configured": false, 00:13:17.953 "data_offset": 0, 00:13:17.953 "data_size": 63488 00:13:17.953 }, 00:13:17.953 { 00:13:17.953 "name": null, 00:13:17.953 "uuid": "943dafba-2cd2-4dd9-9237-f8a97ab1d183", 00:13:17.953 "is_configured": false, 00:13:17.953 "data_offset": 0, 00:13:17.953 "data_size": 63488 00:13:17.953 }, 00:13:17.953 { 00:13:17.953 "name": "BaseBdev4", 00:13:17.953 "uuid": "c1553712-1819-48a4-87c9-94d7e8465594", 00:13:17.953 "is_configured": true, 00:13:17.953 "data_offset": 2048, 00:13:17.953 "data_size": 63488 00:13:17.953 } 00:13:17.953 ] 00:13:17.953 }' 00:13:17.953 20:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.953 20:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.519 20:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.519 20:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.519 20:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.519 20:10:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:18.519 20:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.519 20:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:18.519 20:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:18.519 20:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.519 20:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.519 [2024-10-17 20:10:04.054919] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:18.519 20:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.519 20:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:18.519 20:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:18.519 20:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:18.519 20:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:18.519 20:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:18.519 20:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:18.519 20:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.519 20:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.519 20:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:18.519 20:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.519 20:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.519 20:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:18.519 20:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.519 20:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.519 20:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.519 20:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.519 "name": "Existed_Raid", 00:13:18.519 "uuid": "b7cd228d-861a-4c15-bb0c-d1e7513994fb", 00:13:18.519 "strip_size_kb": 64, 00:13:18.519 "state": "configuring", 00:13:18.519 "raid_level": "concat", 00:13:18.519 "superblock": true, 00:13:18.519 "num_base_bdevs": 4, 00:13:18.519 "num_base_bdevs_discovered": 3, 00:13:18.519 "num_base_bdevs_operational": 4, 00:13:18.519 "base_bdevs_list": [ 00:13:18.519 { 00:13:18.519 "name": "BaseBdev1", 00:13:18.519 "uuid": "0ec7e07e-f7b6-43a4-aa82-ba45eb13768a", 00:13:18.519 "is_configured": true, 00:13:18.519 "data_offset": 2048, 00:13:18.519 "data_size": 63488 00:13:18.519 }, 00:13:18.519 { 00:13:18.519 "name": null, 00:13:18.519 "uuid": "719a0c53-9c83-4ef3-b318-ddc170689dbf", 00:13:18.519 "is_configured": false, 00:13:18.519 "data_offset": 0, 00:13:18.519 "data_size": 63488 00:13:18.519 }, 00:13:18.519 { 00:13:18.519 "name": "BaseBdev3", 00:13:18.519 "uuid": "943dafba-2cd2-4dd9-9237-f8a97ab1d183", 00:13:18.519 "is_configured": true, 00:13:18.519 "data_offset": 2048, 00:13:18.519 "data_size": 63488 00:13:18.519 }, 00:13:18.519 { 00:13:18.519 "name": "BaseBdev4", 00:13:18.519 "uuid": 
"c1553712-1819-48a4-87c9-94d7e8465594", 00:13:18.519 "is_configured": true, 00:13:18.519 "data_offset": 2048, 00:13:18.519 "data_size": 63488 00:13:18.519 } 00:13:18.519 ] 00:13:18.519 }' 00:13:18.519 20:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.519 20:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.085 20:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.085 20:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.085 20:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:19.085 20:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.085 20:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.085 20:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:19.085 20:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:19.085 20:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.085 20:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.085 [2024-10-17 20:10:04.663131] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:19.344 20:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.344 20:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:19.344 20:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:19.344 20:10:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:19.344 20:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:19.344 20:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:19.344 20:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:19.344 20:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.344 20:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.344 20:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.344 20:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.344 20:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.344 20:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.344 20:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.344 20:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:19.344 20:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.344 20:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.344 "name": "Existed_Raid", 00:13:19.344 "uuid": "b7cd228d-861a-4c15-bb0c-d1e7513994fb", 00:13:19.344 "strip_size_kb": 64, 00:13:19.344 "state": "configuring", 00:13:19.344 "raid_level": "concat", 00:13:19.344 "superblock": true, 00:13:19.344 "num_base_bdevs": 4, 00:13:19.344 "num_base_bdevs_discovered": 2, 00:13:19.344 "num_base_bdevs_operational": 4, 00:13:19.344 "base_bdevs_list": [ 00:13:19.344 { 00:13:19.344 "name": null, 00:13:19.344 
"uuid": "0ec7e07e-f7b6-43a4-aa82-ba45eb13768a", 00:13:19.344 "is_configured": false, 00:13:19.344 "data_offset": 0, 00:13:19.344 "data_size": 63488 00:13:19.344 }, 00:13:19.344 { 00:13:19.344 "name": null, 00:13:19.344 "uuid": "719a0c53-9c83-4ef3-b318-ddc170689dbf", 00:13:19.344 "is_configured": false, 00:13:19.344 "data_offset": 0, 00:13:19.344 "data_size": 63488 00:13:19.344 }, 00:13:19.344 { 00:13:19.344 "name": "BaseBdev3", 00:13:19.344 "uuid": "943dafba-2cd2-4dd9-9237-f8a97ab1d183", 00:13:19.344 "is_configured": true, 00:13:19.344 "data_offset": 2048, 00:13:19.344 "data_size": 63488 00:13:19.344 }, 00:13:19.344 { 00:13:19.344 "name": "BaseBdev4", 00:13:19.344 "uuid": "c1553712-1819-48a4-87c9-94d7e8465594", 00:13:19.344 "is_configured": true, 00:13:19.344 "data_offset": 2048, 00:13:19.344 "data_size": 63488 00:13:19.344 } 00:13:19.344 ] 00:13:19.344 }' 00:13:19.344 20:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.344 20:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.910 20:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.910 20:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:19.910 20:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.910 20:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.910 20:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.910 20:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:19.910 20:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:19.910 20:10:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.910 20:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.910 [2024-10-17 20:10:05.321726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:19.910 20:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.910 20:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:19.910 20:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:19.910 20:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:19.911 20:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:19.911 20:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:19.911 20:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:19.911 20:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.911 20:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.911 20:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.911 20:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.911 20:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.911 20:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:19.911 20:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.911 20:10:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.911 20:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.911 20:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.911 "name": "Existed_Raid", 00:13:19.911 "uuid": "b7cd228d-861a-4c15-bb0c-d1e7513994fb", 00:13:19.911 "strip_size_kb": 64, 00:13:19.911 "state": "configuring", 00:13:19.911 "raid_level": "concat", 00:13:19.911 "superblock": true, 00:13:19.911 "num_base_bdevs": 4, 00:13:19.911 "num_base_bdevs_discovered": 3, 00:13:19.911 "num_base_bdevs_operational": 4, 00:13:19.911 "base_bdevs_list": [ 00:13:19.911 { 00:13:19.911 "name": null, 00:13:19.911 "uuid": "0ec7e07e-f7b6-43a4-aa82-ba45eb13768a", 00:13:19.911 "is_configured": false, 00:13:19.911 "data_offset": 0, 00:13:19.911 "data_size": 63488 00:13:19.911 }, 00:13:19.911 { 00:13:19.911 "name": "BaseBdev2", 00:13:19.911 "uuid": "719a0c53-9c83-4ef3-b318-ddc170689dbf", 00:13:19.911 "is_configured": true, 00:13:19.911 "data_offset": 2048, 00:13:19.911 "data_size": 63488 00:13:19.911 }, 00:13:19.911 { 00:13:19.911 "name": "BaseBdev3", 00:13:19.911 "uuid": "943dafba-2cd2-4dd9-9237-f8a97ab1d183", 00:13:19.911 "is_configured": true, 00:13:19.911 "data_offset": 2048, 00:13:19.911 "data_size": 63488 00:13:19.911 }, 00:13:19.911 { 00:13:19.911 "name": "BaseBdev4", 00:13:19.911 "uuid": "c1553712-1819-48a4-87c9-94d7e8465594", 00:13:19.911 "is_configured": true, 00:13:19.911 "data_offset": 2048, 00:13:19.911 "data_size": 63488 00:13:19.911 } 00:13:19.911 ] 00:13:19.911 }' 00:13:19.911 20:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.911 20:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.477 20:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:20.477 20:10:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.477 20:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.477 20:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.477 20:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.477 20:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:20.477 20:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.477 20:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.477 20:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:20.477 20:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.477 20:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.477 20:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0ec7e07e-f7b6-43a4-aa82-ba45eb13768a 00:13:20.477 20:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.477 20:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.477 [2024-10-17 20:10:05.996657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:20.477 [2024-10-17 20:10:05.997198] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:20.477 [2024-10-17 20:10:05.997223] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:20.477 NewBaseBdev 00:13:20.477 [2024-10-17 20:10:05.997589] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:20.477 [2024-10-17 20:10:05.997771] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:20.477 [2024-10-17 20:10:05.997805] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:20.477 [2024-10-17 20:10:05.997961] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:20.477 20:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.477 20:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:20.477 20:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:13:20.477 20:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:20.477 20:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:20.477 20:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:20.477 20:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:20.477 20:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:20.477 20:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.477 20:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.477 20:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.477 20:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:20.477 20:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.477 20:10:06 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.477 [ 00:13:20.477 { 00:13:20.477 "name": "NewBaseBdev", 00:13:20.477 "aliases": [ 00:13:20.477 "0ec7e07e-f7b6-43a4-aa82-ba45eb13768a" 00:13:20.477 ], 00:13:20.477 "product_name": "Malloc disk", 00:13:20.477 "block_size": 512, 00:13:20.477 "num_blocks": 65536, 00:13:20.477 "uuid": "0ec7e07e-f7b6-43a4-aa82-ba45eb13768a", 00:13:20.477 "assigned_rate_limits": { 00:13:20.477 "rw_ios_per_sec": 0, 00:13:20.477 "rw_mbytes_per_sec": 0, 00:13:20.477 "r_mbytes_per_sec": 0, 00:13:20.477 "w_mbytes_per_sec": 0 00:13:20.477 }, 00:13:20.477 "claimed": true, 00:13:20.477 "claim_type": "exclusive_write", 00:13:20.477 "zoned": false, 00:13:20.477 "supported_io_types": { 00:13:20.477 "read": true, 00:13:20.477 "write": true, 00:13:20.477 "unmap": true, 00:13:20.477 "flush": true, 00:13:20.477 "reset": true, 00:13:20.477 "nvme_admin": false, 00:13:20.478 "nvme_io": false, 00:13:20.478 "nvme_io_md": false, 00:13:20.478 "write_zeroes": true, 00:13:20.478 "zcopy": true, 00:13:20.478 "get_zone_info": false, 00:13:20.478 "zone_management": false, 00:13:20.478 "zone_append": false, 00:13:20.478 "compare": false, 00:13:20.478 "compare_and_write": false, 00:13:20.478 "abort": true, 00:13:20.478 "seek_hole": false, 00:13:20.478 "seek_data": false, 00:13:20.478 "copy": true, 00:13:20.478 "nvme_iov_md": false 00:13:20.478 }, 00:13:20.478 "memory_domains": [ 00:13:20.478 { 00:13:20.478 "dma_device_id": "system", 00:13:20.478 "dma_device_type": 1 00:13:20.478 }, 00:13:20.478 { 00:13:20.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:20.478 "dma_device_type": 2 00:13:20.478 } 00:13:20.478 ], 00:13:20.478 "driver_specific": {} 00:13:20.478 } 00:13:20.478 ] 00:13:20.478 20:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.478 20:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:20.478 20:10:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:13:20.478 20:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:20.478 20:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:20.478 20:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:20.478 20:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:20.478 20:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:20.478 20:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:20.478 20:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:20.478 20:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:20.478 20:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:20.478 20:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.478 20:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:20.478 20:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.478 20:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.478 20:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.478 20:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:20.478 "name": "Existed_Raid", 00:13:20.478 "uuid": "b7cd228d-861a-4c15-bb0c-d1e7513994fb", 00:13:20.478 "strip_size_kb": 64, 00:13:20.478 
"state": "online", 00:13:20.478 "raid_level": "concat", 00:13:20.478 "superblock": true, 00:13:20.478 "num_base_bdevs": 4, 00:13:20.478 "num_base_bdevs_discovered": 4, 00:13:20.478 "num_base_bdevs_operational": 4, 00:13:20.478 "base_bdevs_list": [ 00:13:20.478 { 00:13:20.478 "name": "NewBaseBdev", 00:13:20.478 "uuid": "0ec7e07e-f7b6-43a4-aa82-ba45eb13768a", 00:13:20.478 "is_configured": true, 00:13:20.478 "data_offset": 2048, 00:13:20.478 "data_size": 63488 00:13:20.478 }, 00:13:20.478 { 00:13:20.478 "name": "BaseBdev2", 00:13:20.478 "uuid": "719a0c53-9c83-4ef3-b318-ddc170689dbf", 00:13:20.478 "is_configured": true, 00:13:20.478 "data_offset": 2048, 00:13:20.478 "data_size": 63488 00:13:20.478 }, 00:13:20.478 { 00:13:20.478 "name": "BaseBdev3", 00:13:20.478 "uuid": "943dafba-2cd2-4dd9-9237-f8a97ab1d183", 00:13:20.478 "is_configured": true, 00:13:20.478 "data_offset": 2048, 00:13:20.478 "data_size": 63488 00:13:20.478 }, 00:13:20.478 { 00:13:20.478 "name": "BaseBdev4", 00:13:20.478 "uuid": "c1553712-1819-48a4-87c9-94d7e8465594", 00:13:20.478 "is_configured": true, 00:13:20.478 "data_offset": 2048, 00:13:20.478 "data_size": 63488 00:13:20.478 } 00:13:20.478 ] 00:13:20.478 }' 00:13:20.478 20:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:20.478 20:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.045 20:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:21.045 20:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:21.045 20:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:21.045 20:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:21.045 20:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:21.045 
20:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:21.045 20:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:21.045 20:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:21.045 20:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.045 20:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.045 [2024-10-17 20:10:06.549331] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:21.045 20:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.045 20:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:21.045 "name": "Existed_Raid", 00:13:21.045 "aliases": [ 00:13:21.045 "b7cd228d-861a-4c15-bb0c-d1e7513994fb" 00:13:21.045 ], 00:13:21.045 "product_name": "Raid Volume", 00:13:21.045 "block_size": 512, 00:13:21.045 "num_blocks": 253952, 00:13:21.045 "uuid": "b7cd228d-861a-4c15-bb0c-d1e7513994fb", 00:13:21.045 "assigned_rate_limits": { 00:13:21.045 "rw_ios_per_sec": 0, 00:13:21.045 "rw_mbytes_per_sec": 0, 00:13:21.045 "r_mbytes_per_sec": 0, 00:13:21.045 "w_mbytes_per_sec": 0 00:13:21.045 }, 00:13:21.045 "claimed": false, 00:13:21.045 "zoned": false, 00:13:21.045 "supported_io_types": { 00:13:21.045 "read": true, 00:13:21.045 "write": true, 00:13:21.045 "unmap": true, 00:13:21.045 "flush": true, 00:13:21.045 "reset": true, 00:13:21.045 "nvme_admin": false, 00:13:21.045 "nvme_io": false, 00:13:21.045 "nvme_io_md": false, 00:13:21.045 "write_zeroes": true, 00:13:21.045 "zcopy": false, 00:13:21.045 "get_zone_info": false, 00:13:21.045 "zone_management": false, 00:13:21.045 "zone_append": false, 00:13:21.045 "compare": false, 00:13:21.045 "compare_and_write": false, 00:13:21.045 "abort": 
false, 00:13:21.045 "seek_hole": false, 00:13:21.045 "seek_data": false, 00:13:21.045 "copy": false, 00:13:21.045 "nvme_iov_md": false 00:13:21.045 }, 00:13:21.045 "memory_domains": [ 00:13:21.045 { 00:13:21.045 "dma_device_id": "system", 00:13:21.045 "dma_device_type": 1 00:13:21.045 }, 00:13:21.045 { 00:13:21.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:21.045 "dma_device_type": 2 00:13:21.045 }, 00:13:21.045 { 00:13:21.045 "dma_device_id": "system", 00:13:21.045 "dma_device_type": 1 00:13:21.045 }, 00:13:21.045 { 00:13:21.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:21.045 "dma_device_type": 2 00:13:21.045 }, 00:13:21.045 { 00:13:21.045 "dma_device_id": "system", 00:13:21.045 "dma_device_type": 1 00:13:21.045 }, 00:13:21.045 { 00:13:21.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:21.045 "dma_device_type": 2 00:13:21.045 }, 00:13:21.045 { 00:13:21.045 "dma_device_id": "system", 00:13:21.045 "dma_device_type": 1 00:13:21.045 }, 00:13:21.045 { 00:13:21.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:21.045 "dma_device_type": 2 00:13:21.045 } 00:13:21.045 ], 00:13:21.045 "driver_specific": { 00:13:21.045 "raid": { 00:13:21.045 "uuid": "b7cd228d-861a-4c15-bb0c-d1e7513994fb", 00:13:21.045 "strip_size_kb": 64, 00:13:21.045 "state": "online", 00:13:21.045 "raid_level": "concat", 00:13:21.045 "superblock": true, 00:13:21.045 "num_base_bdevs": 4, 00:13:21.045 "num_base_bdevs_discovered": 4, 00:13:21.045 "num_base_bdevs_operational": 4, 00:13:21.045 "base_bdevs_list": [ 00:13:21.045 { 00:13:21.045 "name": "NewBaseBdev", 00:13:21.045 "uuid": "0ec7e07e-f7b6-43a4-aa82-ba45eb13768a", 00:13:21.045 "is_configured": true, 00:13:21.045 "data_offset": 2048, 00:13:21.045 "data_size": 63488 00:13:21.045 }, 00:13:21.045 { 00:13:21.045 "name": "BaseBdev2", 00:13:21.045 "uuid": "719a0c53-9c83-4ef3-b318-ddc170689dbf", 00:13:21.045 "is_configured": true, 00:13:21.045 "data_offset": 2048, 00:13:21.045 "data_size": 63488 00:13:21.045 }, 00:13:21.045 { 00:13:21.045 
"name": "BaseBdev3", 00:13:21.045 "uuid": "943dafba-2cd2-4dd9-9237-f8a97ab1d183", 00:13:21.045 "is_configured": true, 00:13:21.045 "data_offset": 2048, 00:13:21.045 "data_size": 63488 00:13:21.045 }, 00:13:21.045 { 00:13:21.045 "name": "BaseBdev4", 00:13:21.045 "uuid": "c1553712-1819-48a4-87c9-94d7e8465594", 00:13:21.045 "is_configured": true, 00:13:21.045 "data_offset": 2048, 00:13:21.045 "data_size": 63488 00:13:21.045 } 00:13:21.045 ] 00:13:21.045 } 00:13:21.045 } 00:13:21.045 }' 00:13:21.045 20:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:21.045 20:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:21.045 BaseBdev2 00:13:21.045 BaseBdev3 00:13:21.045 BaseBdev4' 00:13:21.045 20:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:21.045 20:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:21.045 20:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:21.303 20:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:21.303 20:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:21.303 20:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.303 20:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.303 20:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.303 20:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:21.303 20:10:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:21.303 20:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:21.303 20:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:21.303 20:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:21.303 20:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.303 20:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.303 20:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.303 20:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:21.303 20:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:21.303 20:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:21.303 20:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:21.304 20:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:21.304 20:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.304 20:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.304 20:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.304 20:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:21.304 20:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:13:21.304 20:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:21.304 20:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:21.304 20:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:21.304 20:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.304 20:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.304 20:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.304 20:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:21.304 20:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:21.304 20:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:21.304 20:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.304 20:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.304 [2024-10-17 20:10:06.908951] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:21.304 [2024-10-17 20:10:06.909156] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:21.304 [2024-10-17 20:10:06.909266] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:21.304 [2024-10-17 20:10:06.909352] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:21.304 [2024-10-17 20:10:06.909368] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:13:21.304 20:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.304 20:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 71989 00:13:21.304 20:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 71989 ']' 00:13:21.304 20:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 71989 00:13:21.304 20:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:13:21.304 20:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:21.304 20:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71989 00:13:21.304 20:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:21.304 20:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:21.304 killing process with pid 71989 00:13:21.304 20:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71989' 00:13:21.304 20:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 71989 00:13:21.304 [2024-10-17 20:10:06.948139] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:21.304 20:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 71989 00:13:21.870 [2024-10-17 20:10:07.300197] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:22.805 20:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:22.805 00:13:22.805 real 0m12.939s 00:13:22.805 user 0m21.521s 00:13:22.805 sys 0m1.844s 00:13:22.805 20:10:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:22.805 
************************************ 00:13:22.805 END TEST raid_state_function_test_sb 00:13:22.805 ************************************ 00:13:22.805 20:10:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.805 20:10:08 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:13:22.805 20:10:08 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:22.805 20:10:08 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:22.805 20:10:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:22.805 ************************************ 00:13:22.805 START TEST raid_superblock_test 00:13:22.805 ************************************ 00:13:22.805 20:10:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 4 00:13:22.805 20:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:13:22.805 20:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:13:22.805 20:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:22.805 20:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:22.805 20:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:22.805 20:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:22.805 20:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:22.805 20:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:22.805 20:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:22.805 20:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:22.805 20:10:08 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:22.805 20:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:22.805 20:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:22.805 20:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:13:22.805 20:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:13:22.805 20:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:13:22.805 20:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72670 00:13:22.805 20:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:22.805 20:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72670 00:13:22.805 20:10:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 72670 ']' 00:13:22.805 20:10:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:22.805 20:10:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:22.805 20:10:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:22.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:22.805 20:10:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:22.805 20:10:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.064 [2024-10-17 20:10:08.494174] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 
00:13:23.064 [2024-10-17 20:10:08.494630] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72670 ] 00:13:23.064 [2024-10-17 20:10:08.662454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:23.322 [2024-10-17 20:10:08.791635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.580 [2024-10-17 20:10:08.994011] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:23.580 [2024-10-17 20:10:08.994094] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:23.838 20:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:23.838 20:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:13:23.838 20:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:23.838 20:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:23.838 20:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:23.838 20:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:23.838 20:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:23.838 20:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:23.838 20:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:23.838 20:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:23.838 20:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:13:23.838 
20:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.838 20:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.838 malloc1 00:13:23.838 20:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.838 20:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:23.838 20:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.838 20:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.097 [2024-10-17 20:10:09.492474] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:24.097 [2024-10-17 20:10:09.492736] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:24.097 [2024-10-17 20:10:09.492815] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:24.097 [2024-10-17 20:10:09.493111] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:24.097 [2024-10-17 20:10:09.496220] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:24.097 [2024-10-17 20:10:09.496396] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:24.097 pt1 00:13:24.097 20:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.097 20:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:24.097 20:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:24.097 20:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:24.097 20:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:24.097 20:10:09 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:24.097 20:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:24.097 20:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:24.097 20:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:24.097 20:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:24.097 20:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.097 20:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.097 malloc2 00:13:24.097 20:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.097 20:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:24.097 20:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.097 20:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.097 [2024-10-17 20:10:09.549502] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:24.097 [2024-10-17 20:10:09.549577] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:24.097 [2024-10-17 20:10:09.549609] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:24.097 [2024-10-17 20:10:09.549624] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:24.097 [2024-10-17 20:10:09.552390] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:24.097 [2024-10-17 20:10:09.552443] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:24.097 
pt2 00:13:24.097 20:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.097 20:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:24.097 20:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:24.098 20:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:24.098 20:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:24.098 20:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:24.098 20:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:24.098 20:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:24.098 20:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:24.098 20:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:13:24.098 20:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.098 20:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.098 malloc3 00:13:24.098 20:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.098 20:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:24.098 20:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.098 20:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.098 [2024-10-17 20:10:09.618877] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:24.098 [2024-10-17 20:10:09.618961] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:24.098 [2024-10-17 20:10:09.618994] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:24.098 [2024-10-17 20:10:09.619027] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:24.098 [2024-10-17 20:10:09.621810] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:24.098 [2024-10-17 20:10:09.622031] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:24.098 pt3 00:13:24.098 20:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.098 20:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:24.098 20:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:24.098 20:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:13:24.098 20:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:13:24.098 20:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:13:24.098 20:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:24.098 20:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:24.098 20:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:24.098 20:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:13:24.098 20:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.098 20:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.098 malloc4 00:13:24.098 20:10:09 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.098 20:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:24.098 20:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.098 20:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.098 [2024-10-17 20:10:09.675563] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:24.098 [2024-10-17 20:10:09.675777] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:24.098 [2024-10-17 20:10:09.675821] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:24.098 [2024-10-17 20:10:09.675837] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:24.098 [2024-10-17 20:10:09.678696] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:24.098 [2024-10-17 20:10:09.678743] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:24.098 pt4 00:13:24.098 20:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.098 20:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:24.098 20:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:24.098 20:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:13:24.098 20:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.098 20:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.098 [2024-10-17 20:10:09.687661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:24.098 [2024-10-17 
20:10:09.690342] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:24.098 [2024-10-17 20:10:09.690448] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:24.098 [2024-10-17 20:10:09.690538] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:24.098 [2024-10-17 20:10:09.690788] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:24.098 [2024-10-17 20:10:09.690805] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:24.098 [2024-10-17 20:10:09.691303] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:24.098 [2024-10-17 20:10:09.691567] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:24.098 [2024-10-17 20:10:09.691697] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:24.098 [2024-10-17 20:10:09.692071] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:24.098 20:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.098 20:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:24.098 20:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:24.098 20:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:24.098 20:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:24.098 20:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:24.098 20:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:24.098 20:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:13:24.098 20:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.098 20:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:24.098 20:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.098 20:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.098 20:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.098 20:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.098 20:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.098 20:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.098 20:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.098 "name": "raid_bdev1", 00:13:24.098 "uuid": "9868db70-94e0-47e8-9a73-a16be54e3605", 00:13:24.098 "strip_size_kb": 64, 00:13:24.098 "state": "online", 00:13:24.098 "raid_level": "concat", 00:13:24.098 "superblock": true, 00:13:24.098 "num_base_bdevs": 4, 00:13:24.098 "num_base_bdevs_discovered": 4, 00:13:24.098 "num_base_bdevs_operational": 4, 00:13:24.098 "base_bdevs_list": [ 00:13:24.098 { 00:13:24.098 "name": "pt1", 00:13:24.098 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:24.098 "is_configured": true, 00:13:24.098 "data_offset": 2048, 00:13:24.098 "data_size": 63488 00:13:24.098 }, 00:13:24.098 { 00:13:24.098 "name": "pt2", 00:13:24.098 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:24.098 "is_configured": true, 00:13:24.098 "data_offset": 2048, 00:13:24.098 "data_size": 63488 00:13:24.098 }, 00:13:24.098 { 00:13:24.098 "name": "pt3", 00:13:24.098 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:24.098 "is_configured": true, 00:13:24.098 "data_offset": 2048, 00:13:24.098 
"data_size": 63488 00:13:24.098 }, 00:13:24.098 { 00:13:24.098 "name": "pt4", 00:13:24.098 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:24.098 "is_configured": true, 00:13:24.098 "data_offset": 2048, 00:13:24.098 "data_size": 63488 00:13:24.098 } 00:13:24.098 ] 00:13:24.098 }' 00:13:24.098 20:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.098 20:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.664 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:24.664 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:24.664 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:24.665 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:24.665 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:24.665 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:24.665 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:24.665 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:24.665 20:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.665 20:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.665 [2024-10-17 20:10:10.228616] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:24.665 20:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.665 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:24.665 "name": "raid_bdev1", 00:13:24.665 "aliases": [ 00:13:24.665 "9868db70-94e0-47e8-9a73-a16be54e3605" 
00:13:24.665 ], 00:13:24.665 "product_name": "Raid Volume", 00:13:24.665 "block_size": 512, 00:13:24.665 "num_blocks": 253952, 00:13:24.665 "uuid": "9868db70-94e0-47e8-9a73-a16be54e3605", 00:13:24.665 "assigned_rate_limits": { 00:13:24.665 "rw_ios_per_sec": 0, 00:13:24.665 "rw_mbytes_per_sec": 0, 00:13:24.665 "r_mbytes_per_sec": 0, 00:13:24.665 "w_mbytes_per_sec": 0 00:13:24.665 }, 00:13:24.665 "claimed": false, 00:13:24.665 "zoned": false, 00:13:24.665 "supported_io_types": { 00:13:24.665 "read": true, 00:13:24.665 "write": true, 00:13:24.665 "unmap": true, 00:13:24.665 "flush": true, 00:13:24.665 "reset": true, 00:13:24.665 "nvme_admin": false, 00:13:24.665 "nvme_io": false, 00:13:24.665 "nvme_io_md": false, 00:13:24.665 "write_zeroes": true, 00:13:24.665 "zcopy": false, 00:13:24.665 "get_zone_info": false, 00:13:24.665 "zone_management": false, 00:13:24.665 "zone_append": false, 00:13:24.665 "compare": false, 00:13:24.665 "compare_and_write": false, 00:13:24.665 "abort": false, 00:13:24.665 "seek_hole": false, 00:13:24.665 "seek_data": false, 00:13:24.665 "copy": false, 00:13:24.665 "nvme_iov_md": false 00:13:24.665 }, 00:13:24.665 "memory_domains": [ 00:13:24.665 { 00:13:24.665 "dma_device_id": "system", 00:13:24.665 "dma_device_type": 1 00:13:24.665 }, 00:13:24.665 { 00:13:24.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.665 "dma_device_type": 2 00:13:24.665 }, 00:13:24.665 { 00:13:24.665 "dma_device_id": "system", 00:13:24.665 "dma_device_type": 1 00:13:24.665 }, 00:13:24.665 { 00:13:24.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.665 "dma_device_type": 2 00:13:24.665 }, 00:13:24.665 { 00:13:24.665 "dma_device_id": "system", 00:13:24.665 "dma_device_type": 1 00:13:24.665 }, 00:13:24.665 { 00:13:24.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.665 "dma_device_type": 2 00:13:24.665 }, 00:13:24.665 { 00:13:24.665 "dma_device_id": "system", 00:13:24.665 "dma_device_type": 1 00:13:24.665 }, 00:13:24.665 { 00:13:24.665 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:13:24.665 "dma_device_type": 2 00:13:24.665 } 00:13:24.665 ], 00:13:24.665 "driver_specific": { 00:13:24.665 "raid": { 00:13:24.665 "uuid": "9868db70-94e0-47e8-9a73-a16be54e3605", 00:13:24.665 "strip_size_kb": 64, 00:13:24.665 "state": "online", 00:13:24.665 "raid_level": "concat", 00:13:24.665 "superblock": true, 00:13:24.665 "num_base_bdevs": 4, 00:13:24.665 "num_base_bdevs_discovered": 4, 00:13:24.665 "num_base_bdevs_operational": 4, 00:13:24.665 "base_bdevs_list": [ 00:13:24.665 { 00:13:24.665 "name": "pt1", 00:13:24.665 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:24.665 "is_configured": true, 00:13:24.665 "data_offset": 2048, 00:13:24.665 "data_size": 63488 00:13:24.665 }, 00:13:24.665 { 00:13:24.665 "name": "pt2", 00:13:24.665 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:24.665 "is_configured": true, 00:13:24.665 "data_offset": 2048, 00:13:24.665 "data_size": 63488 00:13:24.665 }, 00:13:24.665 { 00:13:24.665 "name": "pt3", 00:13:24.665 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:24.665 "is_configured": true, 00:13:24.665 "data_offset": 2048, 00:13:24.665 "data_size": 63488 00:13:24.665 }, 00:13:24.665 { 00:13:24.665 "name": "pt4", 00:13:24.665 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:24.665 "is_configured": true, 00:13:24.665 "data_offset": 2048, 00:13:24.665 "data_size": 63488 00:13:24.665 } 00:13:24.665 ] 00:13:24.665 } 00:13:24.665 } 00:13:24.665 }' 00:13:24.665 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:24.924 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:24.924 pt2 00:13:24.924 pt3 00:13:24.924 pt4' 00:13:24.924 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:24.924 20:10:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:24.924 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:24.924 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:24.924 20:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.924 20:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.924 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:24.924 20:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.924 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:24.924 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:24.924 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:24.924 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:24.924 20:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.924 20:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.924 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:24.924 20:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.924 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:24.924 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:24.924 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:24.924 20:10:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:24.924 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:13:24.924 20:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:24.924 20:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:24.924 20:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:24.924 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:13:24.924 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:13:24.924 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:13:24.924 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:13:24.924 20:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:24.924 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:24.924 20:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:24.924 20:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:25.183 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:13:25.183 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:13:25.183 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:13:25.183 20:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:25.183 20:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:25.183 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:13:25.183 [2024-10-17 20:10:10.604661] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:13:25.183 20:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:25.183 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=9868db70-94e0-47e8-9a73-a16be54e3605
00:13:25.183 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 9868db70-94e0-47e8-9a73-a16be54e3605 ']'
00:13:25.183 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:13:25.183 20:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:25.183 20:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:25.183 [2024-10-17 20:10:10.656307] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:13:25.183 [2024-10-17 20:10:10.656459] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:13:25.183 [2024-10-17 20:10:10.656655] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:13:25.183 [2024-10-17 20:10:10.656843] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:13:25.183 [2024-10-17 20:10:10.656986] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:13:25.183 20:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:25.183 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:25.183 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:13:25.183 20:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:25.183 20:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:25.183 20:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:25.183 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:13:25.183 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:13:25.183 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:13:25.183 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:13:25.183 20:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:25.183 20:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:25.183 20:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:25.183 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:13:25.183 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:13:25.183 20:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:25.183 20:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:25.183 20:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:25.183 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:13:25.183 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:13:25.183 20:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:25.183 20:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:25.183 20:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:25.183 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:13:25.183 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4
00:13:25.183 20:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:25.183 20:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:25.183 20:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:25.183 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:13:25.183 20:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:25.183 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:13:25.183 20:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:25.183 20:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:25.183 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:13:25.183 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:13:25.183 20:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0
00:13:25.183 20:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:13:25.183 20:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:13:25.183 20:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:13:25.183 20:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:13:25.183 20:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:13:25.183 20:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:13:25.183 20:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:25.183 20:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:25.183 [2024-10-17 20:10:10.808352] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:13:25.183 [2024-10-17 20:10:10.810790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:13:25.183 [2024-10-17 20:10:10.810856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:13:25.183 [2024-10-17 20:10:10.810908] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed
00:13:25.183 [2024-10-17 20:10:10.810976] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:13:25.183 [2024-10-17 20:10:10.811067] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:13:25.183 [2024-10-17 20:10:10.811101] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:13:25.183 [2024-10-17 20:10:10.811130] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4
00:13:25.183 [2024-10-17 20:10:10.811151] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:13:25.183 [2024-10-17 20:10:10.811169] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:13:25.183 request:
00:13:25.183 {
00:13:25.183 "name": "raid_bdev1",
00:13:25.183 "raid_level": "concat",
00:13:25.183 "base_bdevs": [
00:13:25.183 "malloc1",
00:13:25.183 "malloc2",
00:13:25.183 "malloc3",
00:13:25.183 "malloc4"
00:13:25.183 ],
00:13:25.183 "strip_size_kb": 64,
00:13:25.183 "superblock": false,
00:13:25.183 "method": "bdev_raid_create",
00:13:25.183 "req_id": 1
00:13:25.183 }
00:13:25.183 Got JSON-RPC error response
00:13:25.183 response:
00:13:25.183 {
00:13:25.183 "code": -17,
00:13:25.183 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:13:25.183 }
00:13:25.183 20:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:13:25.183 20:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1
00:13:25.183 20:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:13:25.183 20:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:13:25.183 20:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:13:25.183 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:25.183 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:13:25.184 20:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:25.184 20:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:25.184 20:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:25.443 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:13:25.443 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:13:25.443 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:13:25.443 20:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:25.443 20:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:25.443 [2024-10-17 20:10:10.872336] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:13:25.443 [2024-10-17 20:10:10.872603] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:25.443 [2024-10-17 20:10:10.872673] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:13:25.443 [2024-10-17 20:10:10.872788] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:25.443 [2024-10-17 20:10:10.875759] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:25.443 [2024-10-17 20:10:10.875967] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:13:25.443 [2024-10-17 20:10:10.876118] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:13:25.443 [2024-10-17 20:10:10.876201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:13:25.443 pt1
00:13:25.443 20:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:25.443 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4
00:13:25.443 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:25.443 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:25.443 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:13:25.443 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:25.443 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:13:25.443 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:25.443 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:25.443 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:25.443 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:25.443 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:25.443 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:25.443 20:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:25.443 20:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:25.443 20:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:25.443 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:25.443 "name": "raid_bdev1",
00:13:25.443 "uuid": "9868db70-94e0-47e8-9a73-a16be54e3605",
00:13:25.443 "strip_size_kb": 64,
00:13:25.443 "state": "configuring",
00:13:25.443 "raid_level": "concat",
00:13:25.443 "superblock": true,
00:13:25.443 "num_base_bdevs": 4,
00:13:25.443 "num_base_bdevs_discovered": 1,
00:13:25.443 "num_base_bdevs_operational": 4,
00:13:25.443 "base_bdevs_list": [
00:13:25.443 {
00:13:25.443 "name": "pt1",
00:13:25.443 "uuid": "00000000-0000-0000-0000-000000000001",
00:13:25.443 "is_configured": true,
00:13:25.443 "data_offset": 2048,
00:13:25.443 "data_size": 63488
00:13:25.443 },
00:13:25.443 {
00:13:25.443 "name": null,
00:13:25.443 "uuid": "00000000-0000-0000-0000-000000000002",
00:13:25.443 "is_configured": false,
00:13:25.443 "data_offset": 2048,
00:13:25.443 "data_size": 63488
00:13:25.443 },
00:13:25.443 {
00:13:25.443 "name": null,
00:13:25.443 "uuid": "00000000-0000-0000-0000-000000000003",
00:13:25.443 "is_configured": false,
00:13:25.443 "data_offset": 2048,
00:13:25.443 "data_size": 63488
00:13:25.443 },
00:13:25.443 {
00:13:25.443 "name": null,
00:13:25.443 "uuid": "00000000-0000-0000-0000-000000000004",
00:13:25.443 "is_configured": false,
00:13:25.443 "data_offset": 2048,
00:13:25.443 "data_size": 63488
00:13:25.443 }
00:13:25.443 ]
00:13:25.443 }'
00:13:25.443 20:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:25.443 20:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:26.009 20:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']'
00:13:26.009 20:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:13:26.009 20:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:26.009 20:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:26.009 [2024-10-17 20:10:11.384571] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:13:26.009 [2024-10-17 20:10:11.384662] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:26.009 [2024-10-17 20:10:11.384690] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:13:26.009 [2024-10-17 20:10:11.384708] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:26.009 [2024-10-17 20:10:11.385297] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:26.009 [2024-10-17 20:10:11.385344] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:13:26.009 [2024-10-17 20:10:11.385444] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:13:26.009 [2024-10-17 20:10:11.385479] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:13:26.009 pt2
00:13:26.009 20:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:26.009 20:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:13:26.009 20:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:26.009 20:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:26.009 [2024-10-17 20:10:11.392569] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:13:26.009 20:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:26.009 20:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4
00:13:26.009 20:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:26.009 20:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:26.009 20:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:13:26.009 20:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:26.009 20:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:13:26.009 20:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:26.009 20:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:26.009 20:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:26.009 20:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:26.009 20:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:26.009 20:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:26.009 20:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:26.009 20:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:26.009 20:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:26.009 20:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:26.009 "name": "raid_bdev1",
00:13:26.009 "uuid": "9868db70-94e0-47e8-9a73-a16be54e3605",
00:13:26.009 "strip_size_kb": 64,
00:13:26.009 "state": "configuring",
00:13:26.009 "raid_level": "concat",
00:13:26.009 "superblock": true,
00:13:26.009 "num_base_bdevs": 4,
00:13:26.009 "num_base_bdevs_discovered": 1,
00:13:26.009 "num_base_bdevs_operational": 4,
00:13:26.009 "base_bdevs_list": [
00:13:26.009 {
00:13:26.009 "name": "pt1",
00:13:26.009 "uuid": "00000000-0000-0000-0000-000000000001",
00:13:26.009 "is_configured": true,
00:13:26.009 "data_offset": 2048,
00:13:26.009 "data_size": 63488
00:13:26.009 },
00:13:26.009 {
00:13:26.009 "name": null,
00:13:26.009 "uuid": "00000000-0000-0000-0000-000000000002",
00:13:26.009 "is_configured": false,
00:13:26.009 "data_offset": 0,
00:13:26.009 "data_size": 63488
00:13:26.009 },
00:13:26.009 {
00:13:26.009 "name": null,
00:13:26.009 "uuid": "00000000-0000-0000-0000-000000000003",
00:13:26.009 "is_configured": false,
00:13:26.009 "data_offset": 2048,
00:13:26.009 "data_size": 63488
00:13:26.009 },
00:13:26.009 {
00:13:26.009 "name": null,
00:13:26.009 "uuid": "00000000-0000-0000-0000-000000000004",
00:13:26.009 "is_configured": false,
00:13:26.009 "data_offset": 2048,
00:13:26.009 "data_size": 63488
00:13:26.009 }
00:13:26.009 ]
00:13:26.009 }'
00:13:26.009 20:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:26.009 20:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:26.575 20:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:13:26.575 20:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:13:26.575 20:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:13:26.575 20:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:26.575 20:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:26.575 [2024-10-17 20:10:11.952722] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:13:26.575 [2024-10-17 20:10:11.952817] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:26.575 [2024-10-17 20:10:11.952848] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:13:26.575 [2024-10-17 20:10:11.952863] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:26.575 [2024-10-17 20:10:11.953452] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:26.575 [2024-10-17 20:10:11.953485] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:13:26.575 [2024-10-17 20:10:11.953591] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:13:26.575 [2024-10-17 20:10:11.953622] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:13:26.575 pt2
00:13:26.575 20:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:26.575 20:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:13:26.575 20:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:13:26.575 20:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:13:26.575 20:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:26.575 20:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:26.575 [2024-10-17 20:10:11.964686] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:13:26.575 [2024-10-17 20:10:11.964925] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:26.575 [2024-10-17 20:10:11.964971] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:13:26.575 [2024-10-17 20:10:11.964988] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:26.575 [2024-10-17 20:10:11.965439] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:26.575 [2024-10-17 20:10:11.965473] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:13:26.575 [2024-10-17 20:10:11.965552] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:13:26.575 [2024-10-17 20:10:11.965579] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:13:26.575 pt3
00:13:26.575 20:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:26.575 20:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:13:26.575 20:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:13:26.575 20:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:13:26.575 20:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:26.575 20:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:26.575 [2024-10-17 20:10:11.972675] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:13:26.575 [2024-10-17 20:10:11.972762] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:26.575 [2024-10-17 20:10:11.972789] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180
00:13:26.575 [2024-10-17 20:10:11.972801] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:26.575 [2024-10-17 20:10:11.973268] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:26.575 [2024-10-17 20:10:11.973299] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:13:26.575 [2024-10-17 20:10:11.973383] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4
00:13:26.575 [2024-10-17 20:10:11.973459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:13:26.575 [2024-10-17 20:10:11.973640] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:13:26.575 [2024-10-17 20:10:11.973656] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:13:26.575 [2024-10-17 20:10:11.973981] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:13:26.575 [2024-10-17 20:10:11.974217] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:13:26.575 [2024-10-17 20:10:11.974239] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:13:26.575 [2024-10-17 20:10:11.974416] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:26.575 pt4
00:13:26.575 20:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:26.575 20:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:13:26.575 20:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:13:26.575 20:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4
00:13:26.576 20:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:26.576 20:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:26.576 20:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:13:26.576 20:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:26.576 20:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:13:26.576 20:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:26.576 20:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:26.576 20:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:26.576 20:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:26.576 20:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:26.576 20:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:26.576 20:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:26.576 20:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:26.576 20:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:26.576 20:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:26.576 "name": "raid_bdev1",
00:13:26.576 "uuid": "9868db70-94e0-47e8-9a73-a16be54e3605",
00:13:26.576 "strip_size_kb": 64,
00:13:26.576 "state": "online",
00:13:26.576 "raid_level": "concat",
00:13:26.576 "superblock": true,
00:13:26.576 "num_base_bdevs": 4,
00:13:26.576 "num_base_bdevs_discovered": 4,
00:13:26.576 "num_base_bdevs_operational": 4,
00:13:26.576 "base_bdevs_list": [
00:13:26.576 {
00:13:26.576 "name": "pt1",
00:13:26.576 "uuid": "00000000-0000-0000-0000-000000000001",
00:13:26.576 "is_configured": true,
00:13:26.576 "data_offset": 2048,
00:13:26.576 "data_size": 63488
00:13:26.576 },
00:13:26.576 {
00:13:26.576 "name": "pt2",
00:13:26.576 "uuid": "00000000-0000-0000-0000-000000000002",
00:13:26.576 "is_configured": true,
00:13:26.576 "data_offset": 2048,
00:13:26.576 "data_size": 63488
00:13:26.576 },
00:13:26.576 {
00:13:26.576 "name": "pt3",
00:13:26.576 "uuid": "00000000-0000-0000-0000-000000000003",
00:13:26.576 "is_configured": true,
00:13:26.576 "data_offset": 2048,
00:13:26.576 "data_size": 63488
00:13:26.576 },
00:13:26.576 {
00:13:26.576 "name": "pt4",
00:13:26.576 "uuid": "00000000-0000-0000-0000-000000000004",
00:13:26.576 "is_configured": true,
00:13:26.576 "data_offset": 2048,
00:13:26.576 "data_size": 63488
00:13:26.576 }
00:13:26.576 ]
00:13:26.576 }'
00:13:26.576 20:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:26.576 20:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:26.833 20:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:13:26.834 20:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:13:26.834 20:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:13:26.834 20:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:13:26.834 20:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:13:26.834 20:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:13:27.092 20:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:13:27.092 20:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:13:27.092 20:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:27.092 20:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:27.092 [2024-10-17 20:10:12.497326] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:13:27.092 20:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:27.092 20:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:13:27.092 "name": "raid_bdev1",
00:13:27.092 "aliases": [
00:13:27.092 "9868db70-94e0-47e8-9a73-a16be54e3605"
00:13:27.092 ],
00:13:27.092 "product_name": "Raid Volume",
00:13:27.092 "block_size": 512,
00:13:27.092 "num_blocks": 253952,
00:13:27.092 "uuid": "9868db70-94e0-47e8-9a73-a16be54e3605",
00:13:27.092 "assigned_rate_limits": {
00:13:27.092 "rw_ios_per_sec": 0,
00:13:27.092 "rw_mbytes_per_sec": 0,
00:13:27.092 "r_mbytes_per_sec": 0,
00:13:27.092 "w_mbytes_per_sec": 0
00:13:27.092 },
00:13:27.092 "claimed": false,
00:13:27.092 "zoned": false,
00:13:27.092 "supported_io_types": {
00:13:27.092 "read": true,
00:13:27.092 "write": true,
00:13:27.092 "unmap": true,
00:13:27.092 "flush": true,
00:13:27.092 "reset": true,
00:13:27.092 "nvme_admin": false,
00:13:27.092 "nvme_io": false,
00:13:27.092 "nvme_io_md": false,
00:13:27.092 "write_zeroes": true,
00:13:27.092 "zcopy": false,
00:13:27.092 "get_zone_info": false,
00:13:27.092 "zone_management": false,
00:13:27.092 "zone_append": false,
00:13:27.092 "compare": false,
00:13:27.092 "compare_and_write": false,
00:13:27.092 "abort": false,
00:13:27.092 "seek_hole": false,
00:13:27.092 "seek_data": false,
00:13:27.092 "copy": false,
00:13:27.092 "nvme_iov_md": false
00:13:27.092 },
00:13:27.092 "memory_domains": [
00:13:27.092 {
00:13:27.092 "dma_device_id": "system",
00:13:27.092 "dma_device_type": 1
00:13:27.092 },
00:13:27.092 {
00:13:27.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:27.092 "dma_device_type": 2
00:13:27.092 },
00:13:27.092 {
00:13:27.092 "dma_device_id": "system",
00:13:27.092 "dma_device_type": 1
00:13:27.092 },
00:13:27.092 {
00:13:27.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:27.092 "dma_device_type": 2
00:13:27.092 },
00:13:27.092 {
00:13:27.092 "dma_device_id": "system",
00:13:27.092 "dma_device_type": 1
00:13:27.092 },
00:13:27.092 {
00:13:27.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:27.092 "dma_device_type": 2
00:13:27.092 },
00:13:27.092 {
00:13:27.092 "dma_device_id": "system",
00:13:27.092 "dma_device_type": 1
00:13:27.092 },
00:13:27.092 {
00:13:27.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:27.092 "dma_device_type": 2
00:13:27.092 }
00:13:27.092 ],
00:13:27.092 "driver_specific": {
00:13:27.092 "raid": {
00:13:27.092 "uuid": "9868db70-94e0-47e8-9a73-a16be54e3605",
00:13:27.092 "strip_size_kb": 64,
00:13:27.092 "state": "online",
00:13:27.092 "raid_level": "concat",
00:13:27.092 "superblock": true,
00:13:27.092 "num_base_bdevs": 4,
00:13:27.092 "num_base_bdevs_discovered": 4,
00:13:27.092 "num_base_bdevs_operational": 4,
00:13:27.092 "base_bdevs_list": [
00:13:27.092 {
00:13:27.092 "name": "pt1",
00:13:27.092 "uuid": "00000000-0000-0000-0000-000000000001",
00:13:27.092 "is_configured": true,
00:13:27.092 "data_offset": 2048,
00:13:27.092 "data_size": 63488
00:13:27.092 },
00:13:27.092 {
00:13:27.092 "name": "pt2",
00:13:27.092 "uuid": "00000000-0000-0000-0000-000000000002",
00:13:27.092 "is_configured": true,
00:13:27.092 "data_offset": 2048,
00:13:27.092 "data_size": 63488
00:13:27.092 },
00:13:27.092 {
00:13:27.092 "name": "pt3",
00:13:27.092 "uuid": "00000000-0000-0000-0000-000000000003",
00:13:27.092 "is_configured": true,
00:13:27.092 "data_offset": 2048,
00:13:27.092 "data_size": 63488
00:13:27.092 },
00:13:27.092 {
00:13:27.092 "name": "pt4",
00:13:27.092 "uuid": "00000000-0000-0000-0000-000000000004",
00:13:27.092 "is_configured": true,
00:13:27.092 "data_offset": 2048,
00:13:27.092 "data_size": 63488
00:13:27.092 }
00:13:27.092 ]
00:13:27.092 }
00:13:27.092 }
00:13:27.092 }'
00:13:27.092 20:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:13:27.092 20:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:13:27.092 pt2
00:13:27.092 pt3
00:13:27.092 pt4'
00:13:27.092 20:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:27.092 20:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:13:27.092 20:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:13:27.092 20:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:27.092 20:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:13:27.092 20:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:27.092 20:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:27.092 20:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:27.092 20:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:13:27.092 20:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:13:27.092 20:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:13:27.092 20:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:13:27.092 20:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:27.092 20:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:27.092 20:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:27.092 20:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:27.351 20:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:13:27.351 20:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:13:27.351 20:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:13:27.351 20:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:27.351 20:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:13:27.351 20:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:27.351 20:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:27.351 20:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:27.351 20:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:13:27.351 20:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:13:27.351 20:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:13:27.351 20:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:13:27.351 20:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:27.351 20:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:27.351 20:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:27.351 20:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:27.351 20:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:13:27.351 20:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:13:27.351 20:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:13:27.351 20:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:27.351 20:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:27.351 20:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:13:27.351 [2024-10-17 20:10:12.869322] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:13:27.351 20:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:27.351 20:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 9868db70-94e0-47e8-9a73-a16be54e3605 '!=' 9868db70-94e0-47e8-9a73-a16be54e3605 ']'
00:13:27.351 20:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat
00:13:27.351 20:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:13:27.351 20:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1
00:13:27.351 20:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72670
00:13:27.351 20:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 72670 ']'
00:13:27.351 20:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 72670
00:13:27.351 20:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname
00:13:27.351 20:10:12
bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:27.351 20:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72670 00:13:27.351 killing process with pid 72670 00:13:27.351 20:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:27.351 20:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:27.351 20:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72670' 00:13:27.351 20:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 72670 00:13:27.351 [2024-10-17 20:10:12.948027] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:27.351 [2024-10-17 20:10:12.948146] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:27.351 20:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 72670 00:13:27.351 [2024-10-17 20:10:12.948240] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:27.351 [2024-10-17 20:10:12.948256] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:27.917 [2024-10-17 20:10:13.289424] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:28.851 ************************************ 00:13:28.851 END TEST raid_superblock_test 00:13:28.851 ************************************ 00:13:28.851 20:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:28.851 00:13:28.851 real 0m5.908s 00:13:28.851 user 0m8.919s 00:13:28.851 sys 0m0.862s 00:13:28.851 20:10:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:28.851 20:10:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.851 
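The superblock test above repeatedly compares the jq-joined bdev geometry against a pattern that xtrace renders as `\5\1\2\ \ \ `. A minimal bash sketch of why it looks that way (the exact field values are assumptions mirroring the trace: `md_size`, `md_interleave` and `dif_type` are null, so jq's `join(" ")` leaves trailing separator spaces after the 512-byte block size, and xtrace escapes each space on the pattern side of `[[ ]]`):

```shell
# Sketch of the bdev_raid.sh@193 comparison seen in the xtrace above.
# Assumption: the joined string is "512" plus three trailing spaces,
# one per null metadata field kept by jq's join(" ").
cmp_base_bdev='512   '

# On the right of == inside [[ ]], an unquoted space must be escaped to
# be matched literally -- hence the \5\1\2\ \ \  rendering in the log.
[[ $cmp_base_bdev == \5\1\2\ \ \  ]] && echo "base bdev matches raid bdev geometry"
```

The comparison succeeds only when the base bdev and the raid bdev report identical block size and (absent) metadata layout, which is exactly what each `cmp_base_bdev='512 '` check in the trace is verifying.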
20:10:14 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:13:28.851 20:10:14 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:28.851 20:10:14 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:28.851 20:10:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:28.851 ************************************ 00:13:28.851 START TEST raid_read_error_test 00:13:28.851 ************************************ 00:13:28.851 20:10:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 read 00:13:28.851 20:10:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:13:28.851 20:10:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:28.851 20:10:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:13:28.851 20:10:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:28.851 20:10:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:28.851 20:10:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:28.851 20:10:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:28.851 20:10:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:28.851 20:10:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:28.851 20:10:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:28.851 20:10:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:28.851 20:10:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:28.851 20:10:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:28.851 20:10:14 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:28.851 20:10:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:28.851 20:10:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:28.851 20:10:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:28.851 20:10:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:28.851 20:10:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:28.851 20:10:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:28.851 20:10:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:28.851 20:10:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:28.851 20:10:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:28.851 20:10:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:28.851 20:10:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:13:28.851 20:10:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:28.851 20:10:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:28.851 20:10:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:28.852 20:10:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.wFNXtSza7c 00:13:28.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:28.852 20:10:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72935 00:13:28.852 20:10:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72935 00:13:28.852 20:10:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 72935 ']' 00:13:28.852 20:10:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:28.852 20:10:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:28.852 20:10:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:28.852 20:10:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:28.852 20:10:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:28.852 20:10:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.852 [2024-10-17 20:10:14.474481] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 
00:13:28.852 [2024-10-17 20:10:14.474970] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72935 ] 00:13:29.110 [2024-10-17 20:10:14.651532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:29.368 [2024-10-17 20:10:14.783061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.368 [2024-10-17 20:10:14.987828] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:29.368 [2024-10-17 20:10:14.987871] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:29.933 20:10:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:29.933 20:10:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:13:29.933 20:10:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:29.933 20:10:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:29.933 20:10:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.933 20:10:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.933 BaseBdev1_malloc 00:13:29.933 20:10:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.933 20:10:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:29.933 20:10:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.933 20:10:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.933 true 00:13:29.933 20:10:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
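The read-error test that starts here builds each base bdev as a malloc → error-injection → passthru chain (the `bdev_malloc_create` / `bdev_error_create` / `bdev_passthru_create` calls at bdev_raid.sh@814-817 in the trace below). A runnable sketch of that loop, with `rpc_cmd` stubbed out to a plain echo since no SPDK target is available here (the stub is an assumption; the real helper forwards these verbs to the RPC server on /var/tmp/spdk.sock):

```shell
# Stub for SPDK's rpc_cmd helper so the loop structure runs standalone.
# Assumption: the real rpc_cmd sends each verb to scripts/rpc.py.
rpc_cmd() { echo "rpc: $*"; }

base_bdevs=(BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4)

for bdev in "${base_bdevs[@]}"; do
    # 32 MiB malloc bdev with 512-byte blocks, as in the trace
    rpc_cmd bdev_malloc_create 32 512 -b "${bdev}_malloc"
    # error-injection wrapper, exposed as EE_<name>; this is what
    # bdev_error_inject_error targets later in the test
    rpc_cmd bdev_error_create "${bdev}_malloc"
    # passthru on top gives bdev_raid_create a stable name to claim
    rpc_cmd bdev_passthru_create -b "EE_${bdev}_malloc" -p "$bdev"
done
```

Layering the error bdev under a passthru is what lets the test later inject a read failure on `EE_BaseBdev1_malloc` while the raid bdev only ever sees the claimed `BaseBdev1` name.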
00:13:29.933 20:10:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:29.933 20:10:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.933 20:10:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.933 [2024-10-17 20:10:15.559318] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:29.933 [2024-10-17 20:10:15.559416] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:29.933 [2024-10-17 20:10:15.559460] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:29.933 [2024-10-17 20:10:15.559481] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:29.933 [2024-10-17 20:10:15.562456] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:29.933 [2024-10-17 20:10:15.562520] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:29.933 BaseBdev1 00:13:29.933 20:10:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.933 20:10:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:29.933 20:10:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:29.933 20:10:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.933 20:10:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.192 BaseBdev2_malloc 00:13:30.192 20:10:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.192 20:10:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:30.192 20:10:15 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.192 20:10:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.192 true 00:13:30.192 20:10:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.192 20:10:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:30.192 20:10:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.192 20:10:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.192 [2024-10-17 20:10:15.618889] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:30.192 [2024-10-17 20:10:15.618978] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:30.192 [2024-10-17 20:10:15.619019] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:30.192 [2024-10-17 20:10:15.619040] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:30.192 [2024-10-17 20:10:15.621904] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:30.192 [2024-10-17 20:10:15.621965] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:30.192 BaseBdev2 00:13:30.192 20:10:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.192 20:10:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:30.192 20:10:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:30.192 20:10:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.192 20:10:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.192 BaseBdev3_malloc 00:13:30.192 20:10:15 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.192 20:10:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:30.192 20:10:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.192 20:10:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.192 true 00:13:30.192 20:10:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.192 20:10:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:30.192 20:10:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.192 20:10:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.192 [2024-10-17 20:10:15.689089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:30.192 [2024-10-17 20:10:15.689177] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:30.192 [2024-10-17 20:10:15.689206] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:30.192 [2024-10-17 20:10:15.689224] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:30.192 [2024-10-17 20:10:15.692160] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:30.192 [2024-10-17 20:10:15.692209] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:30.192 BaseBdev3 00:13:30.192 20:10:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.192 20:10:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:30.192 20:10:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:13:30.192 20:10:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.192 20:10:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.192 BaseBdev4_malloc 00:13:30.192 20:10:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.192 20:10:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:30.192 20:10:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.192 20:10:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.192 true 00:13:30.192 20:10:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.192 20:10:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:30.192 20:10:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.192 20:10:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.192 [2024-10-17 20:10:15.749835] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:30.192 [2024-10-17 20:10:15.750094] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:30.192 [2024-10-17 20:10:15.750133] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:30.192 [2024-10-17 20:10:15.750152] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:30.192 [2024-10-17 20:10:15.753056] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:30.192 [2024-10-17 20:10:15.753240] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:30.192 BaseBdev4 00:13:30.192 20:10:15 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.192 20:10:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:30.192 20:10:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.192 20:10:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.192 [2024-10-17 20:10:15.761907] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:30.192 [2024-10-17 20:10:15.764476] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:30.192 [2024-10-17 20:10:15.764770] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:30.192 [2024-10-17 20:10:15.764878] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:30.193 [2024-10-17 20:10:15.765210] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:30.193 [2024-10-17 20:10:15.765235] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:30.193 [2024-10-17 20:10:15.765589] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:30.193 [2024-10-17 20:10:15.765800] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:30.193 [2024-10-17 20:10:15.765819] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:30.193 [2024-10-17 20:10:15.766076] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:30.193 20:10:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.193 20:10:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:30.193 20:10:15 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:30.193 20:10:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:30.193 20:10:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:30.193 20:10:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:30.193 20:10:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:30.193 20:10:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.193 20:10:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.193 20:10:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:30.193 20:10:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.193 20:10:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.193 20:10:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.193 20:10:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.193 20:10:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.193 20:10:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.193 20:10:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.193 "name": "raid_bdev1", 00:13:30.193 "uuid": "4cefd006-c5ad-42ed-bce4-49be4795a89d", 00:13:30.193 "strip_size_kb": 64, 00:13:30.193 "state": "online", 00:13:30.193 "raid_level": "concat", 00:13:30.193 "superblock": true, 00:13:30.193 "num_base_bdevs": 4, 00:13:30.193 "num_base_bdevs_discovered": 4, 00:13:30.193 "num_base_bdevs_operational": 4, 00:13:30.193 "base_bdevs_list": [ 
00:13:30.193 { 00:13:30.193 "name": "BaseBdev1", 00:13:30.193 "uuid": "48478c8e-adc7-5370-acf2-5f040816f336", 00:13:30.193 "is_configured": true, 00:13:30.193 "data_offset": 2048, 00:13:30.193 "data_size": 63488 00:13:30.193 }, 00:13:30.193 { 00:13:30.193 "name": "BaseBdev2", 00:13:30.193 "uuid": "3eafd2bd-ca02-56b5-a427-005ae04fbb78", 00:13:30.193 "is_configured": true, 00:13:30.193 "data_offset": 2048, 00:13:30.193 "data_size": 63488 00:13:30.193 }, 00:13:30.193 { 00:13:30.193 "name": "BaseBdev3", 00:13:30.193 "uuid": "e8905028-f5dc-5e32-b891-9766c420d9b7", 00:13:30.193 "is_configured": true, 00:13:30.193 "data_offset": 2048, 00:13:30.193 "data_size": 63488 00:13:30.193 }, 00:13:30.193 { 00:13:30.193 "name": "BaseBdev4", 00:13:30.193 "uuid": "15317953-8c56-540a-a6d1-0d733346574c", 00:13:30.193 "is_configured": true, 00:13:30.193 "data_offset": 2048, 00:13:30.193 "data_size": 63488 00:13:30.193 } 00:13:30.193 ] 00:13:30.193 }' 00:13:30.193 20:10:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.193 20:10:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.758 20:10:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:30.758 20:10:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:30.758 [2024-10-17 20:10:16.395591] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:31.693 20:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:31.693 20:10:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.693 20:10:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.693 20:10:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.693 20:10:17 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:31.693 20:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:13:31.693 20:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:13:31.693 20:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:31.693 20:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:31.693 20:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:31.693 20:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:31.693 20:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:31.693 20:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:31.693 20:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:31.693 20:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.693 20:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.693 20:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.693 20:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.693 20:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.693 20:10:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.693 20:10:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.693 20:10:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.693 20:10:17 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.693 "name": "raid_bdev1", 00:13:31.693 "uuid": "4cefd006-c5ad-42ed-bce4-49be4795a89d", 00:13:31.693 "strip_size_kb": 64, 00:13:31.693 "state": "online", 00:13:31.693 "raid_level": "concat", 00:13:31.693 "superblock": true, 00:13:31.693 "num_base_bdevs": 4, 00:13:31.693 "num_base_bdevs_discovered": 4, 00:13:31.693 "num_base_bdevs_operational": 4, 00:13:31.693 "base_bdevs_list": [ 00:13:31.693 { 00:13:31.693 "name": "BaseBdev1", 00:13:31.693 "uuid": "48478c8e-adc7-5370-acf2-5f040816f336", 00:13:31.693 "is_configured": true, 00:13:31.693 "data_offset": 2048, 00:13:31.693 "data_size": 63488 00:13:31.693 }, 00:13:31.693 { 00:13:31.693 "name": "BaseBdev2", 00:13:31.693 "uuid": "3eafd2bd-ca02-56b5-a427-005ae04fbb78", 00:13:31.693 "is_configured": true, 00:13:31.693 "data_offset": 2048, 00:13:31.693 "data_size": 63488 00:13:31.693 }, 00:13:31.693 { 00:13:31.693 "name": "BaseBdev3", 00:13:31.693 "uuid": "e8905028-f5dc-5e32-b891-9766c420d9b7", 00:13:31.693 "is_configured": true, 00:13:31.693 "data_offset": 2048, 00:13:31.693 "data_size": 63488 00:13:31.693 }, 00:13:31.693 { 00:13:31.693 "name": "BaseBdev4", 00:13:31.693 "uuid": "15317953-8c56-540a-a6d1-0d733346574c", 00:13:31.693 "is_configured": true, 00:13:31.693 "data_offset": 2048, 00:13:31.693 "data_size": 63488 00:13:31.693 } 00:13:31.693 ] 00:13:31.693 }' 00:13:31.952 20:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.952 20:10:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.211 20:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:32.211 20:10:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.211 20:10:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.211 [2024-10-17 20:10:17.827075] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:32.211 [2024-10-17 20:10:17.827128] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:32.211 [2024-10-17 20:10:17.830547] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:32.211 [2024-10-17 20:10:17.830623] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:32.211 [2024-10-17 20:10:17.830683] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:32.211 [2024-10-17 20:10:17.830702] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:32.211 { 00:13:32.211 "results": [ 00:13:32.211 { 00:13:32.211 "job": "raid_bdev1", 00:13:32.211 "core_mask": "0x1", 00:13:32.211 "workload": "randrw", 00:13:32.211 "percentage": 50, 00:13:32.211 "status": "finished", 00:13:32.211 "queue_depth": 1, 00:13:32.211 "io_size": 131072, 00:13:32.211 "runtime": 1.428902, 00:13:32.211 "iops": 10869.884708678412, 00:13:32.211 "mibps": 1358.7355885848015, 00:13:32.211 "io_failed": 1, 00:13:32.211 "io_timeout": 0, 00:13:32.211 "avg_latency_us": 128.6837309423339, 00:13:32.211 "min_latency_us": 36.77090909090909, 00:13:32.211 "max_latency_us": 2189.498181818182 00:13:32.211 } 00:13:32.211 ], 00:13:32.211 "core_count": 1 00:13:32.211 } 00:13:32.211 20:10:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.211 20:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72935 00:13:32.211 20:10:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 72935 ']' 00:13:32.211 20:10:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 72935 00:13:32.211 20:10:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:13:32.211 20:10:17 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:32.211 20:10:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72935 00:13:32.211 killing process with pid 72935 00:13:32.211 20:10:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:32.211 20:10:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:32.211 20:10:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72935' 00:13:32.211 20:10:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 72935 00:13:32.211 [2024-10-17 20:10:17.861891] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:32.211 20:10:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 72935 00:13:32.775 [2024-10-17 20:10:18.160208] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:33.708 20:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.wFNXtSza7c 00:13:33.708 20:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:33.708 20:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:33.708 20:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:13:33.708 20:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:13:33.708 20:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:33.708 20:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:33.708 20:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:13:33.708 00:13:33.708 real 0m4.958s 00:13:33.708 user 0m6.106s 00:13:33.708 sys 0m0.636s 00:13:33.708 20:10:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:13:33.708 20:10:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.708 ************************************ 00:13:33.708 END TEST raid_read_error_test 00:13:33.708 ************************************ 00:13:33.708 20:10:19 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:13:33.708 20:10:19 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:33.708 20:10:19 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:33.708 20:10:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:33.966 ************************************ 00:13:33.966 START TEST raid_write_error_test 00:13:33.966 ************************************ 00:13:33.966 20:10:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 write 00:13:33.966 20:10:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:13:33.966 20:10:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:33.966 20:10:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:13:33.966 20:10:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:33.966 20:10:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:33.966 20:10:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:33.966 20:10:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:33.967 20:10:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:33.967 20:10:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:33.967 20:10:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:33.967 20:10:19 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:33.967 20:10:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:33.967 20:10:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:33.967 20:10:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:33.967 20:10:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:33.967 20:10:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:33.967 20:10:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:33.967 20:10:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:33.967 20:10:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:33.967 20:10:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:33.967 20:10:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:33.967 20:10:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:33.967 20:10:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:33.967 20:10:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:33.967 20:10:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:13:33.967 20:10:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:33.967 20:10:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:33.967 20:10:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:33.967 20:10:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.XGIg9DN8fU 00:13:33.967 20:10:19 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73085 00:13:33.967 20:10:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73085 00:13:33.967 20:10:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:33.967 20:10:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 73085 ']' 00:13:33.967 20:10:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:33.967 20:10:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:33.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:33.967 20:10:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:33.967 20:10:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:33.967 20:10:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.967 [2024-10-17 20:10:19.491712] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 
00:13:33.967 [2024-10-17 20:10:19.491877] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73085 ] 00:13:34.225 [2024-10-17 20:10:19.668945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:34.225 [2024-10-17 20:10:19.805123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:34.483 [2024-10-17 20:10:20.020948] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:34.483 [2024-10-17 20:10:20.021078] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:35.050 20:10:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:35.050 20:10:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:13:35.050 20:10:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:35.050 20:10:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:35.050 20:10:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.050 20:10:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.050 BaseBdev1_malloc 00:13:35.050 20:10:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.050 20:10:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:35.050 20:10:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.050 20:10:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.050 true 00:13:35.050 20:10:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:13:35.050 20:10:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:35.050 20:10:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.050 20:10:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.050 [2024-10-17 20:10:20.557818] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:35.050 [2024-10-17 20:10:20.557888] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:35.050 [2024-10-17 20:10:20.557918] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:35.050 [2024-10-17 20:10:20.557942] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:35.050 [2024-10-17 20:10:20.560863] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:35.050 [2024-10-17 20:10:20.561094] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:35.050 BaseBdev1 00:13:35.050 20:10:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.050 20:10:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:35.050 20:10:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:35.050 20:10:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.050 20:10:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.050 BaseBdev2_malloc 00:13:35.050 20:10:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.050 20:10:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:35.050 20:10:20 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.050 20:10:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.050 true 00:13:35.050 20:10:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.050 20:10:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:35.050 20:10:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.050 20:10:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.050 [2024-10-17 20:10:20.613983] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:35.050 [2024-10-17 20:10:20.614061] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:35.050 [2024-10-17 20:10:20.614087] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:35.050 [2024-10-17 20:10:20.614103] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:35.050 [2024-10-17 20:10:20.616912] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:35.050 [2024-10-17 20:10:20.617247] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:35.050 BaseBdev2 00:13:35.050 20:10:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.050 20:10:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:35.050 20:10:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:35.050 20:10:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.050 20:10:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:13:35.050 BaseBdev3_malloc 00:13:35.050 20:10:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.050 20:10:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:35.050 20:10:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.050 20:10:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.050 true 00:13:35.050 20:10:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.050 20:10:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:35.050 20:10:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.050 20:10:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.050 [2024-10-17 20:10:20.683295] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:35.050 [2024-10-17 20:10:20.683386] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:35.050 [2024-10-17 20:10:20.683411] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:35.050 [2024-10-17 20:10:20.683427] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:35.050 [2024-10-17 20:10:20.686279] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:35.050 [2024-10-17 20:10:20.686327] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:35.050 BaseBdev3 00:13:35.050 20:10:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.050 20:10:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:35.050 20:10:20 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:35.050 20:10:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.050 20:10:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.309 BaseBdev4_malloc 00:13:35.309 20:10:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.309 20:10:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:35.309 20:10:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.309 20:10:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.309 true 00:13:35.309 20:10:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.309 20:10:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:35.309 20:10:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.309 20:10:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.309 [2024-10-17 20:10:20.737865] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:35.309 [2024-10-17 20:10:20.737926] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:35.309 [2024-10-17 20:10:20.737950] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:35.309 [2024-10-17 20:10:20.737967] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:35.309 [2024-10-17 20:10:20.740889] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:35.309 [2024-10-17 20:10:20.741105] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:35.309 BaseBdev4 
00:13:35.309 20:10:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.309 20:10:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:35.309 20:10:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.309 20:10:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.309 [2024-10-17 20:10:20.746071] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:35.309 [2024-10-17 20:10:20.748706] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:35.309 [2024-10-17 20:10:20.748826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:35.309 [2024-10-17 20:10:20.748918] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:35.309 [2024-10-17 20:10:20.749246] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:35.309 [2024-10-17 20:10:20.749271] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:35.309 [2024-10-17 20:10:20.749619] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:35.309 [2024-10-17 20:10:20.749844] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:35.309 [2024-10-17 20:10:20.749862] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:35.309 [2024-10-17 20:10:20.750134] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:35.309 20:10:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.309 20:10:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:13:35.309 20:10:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:35.309 20:10:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:35.309 20:10:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:35.309 20:10:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:35.309 20:10:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:35.309 20:10:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.309 20:10:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.309 20:10:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.309 20:10:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.309 20:10:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.309 20:10:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.309 20:10:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.309 20:10:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.309 20:10:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.309 20:10:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.309 "name": "raid_bdev1", 00:13:35.309 "uuid": "3062cad7-72fb-4bf3-9d12-f8d155176db0", 00:13:35.309 "strip_size_kb": 64, 00:13:35.309 "state": "online", 00:13:35.309 "raid_level": "concat", 00:13:35.309 "superblock": true, 00:13:35.309 "num_base_bdevs": 4, 00:13:35.309 "num_base_bdevs_discovered": 4, 00:13:35.309 
"num_base_bdevs_operational": 4, 00:13:35.309 "base_bdevs_list": [ 00:13:35.309 { 00:13:35.309 "name": "BaseBdev1", 00:13:35.309 "uuid": "7697da71-a890-5b6a-b2ed-443fd10dc665", 00:13:35.309 "is_configured": true, 00:13:35.309 "data_offset": 2048, 00:13:35.309 "data_size": 63488 00:13:35.309 }, 00:13:35.309 { 00:13:35.309 "name": "BaseBdev2", 00:13:35.309 "uuid": "74ff8b2b-23ee-507b-86a1-01f684582f06", 00:13:35.309 "is_configured": true, 00:13:35.309 "data_offset": 2048, 00:13:35.309 "data_size": 63488 00:13:35.309 }, 00:13:35.309 { 00:13:35.309 "name": "BaseBdev3", 00:13:35.309 "uuid": "740ca759-7faf-516b-8629-a4399aa9b81b", 00:13:35.309 "is_configured": true, 00:13:35.309 "data_offset": 2048, 00:13:35.309 "data_size": 63488 00:13:35.309 }, 00:13:35.309 { 00:13:35.309 "name": "BaseBdev4", 00:13:35.309 "uuid": "21355e11-8d1b-52ef-8ed6-cd1ebb0d3e49", 00:13:35.309 "is_configured": true, 00:13:35.309 "data_offset": 2048, 00:13:35.309 "data_size": 63488 00:13:35.309 } 00:13:35.309 ] 00:13:35.309 }' 00:13:35.309 20:10:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.309 20:10:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.876 20:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:35.876 20:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:35.876 [2024-10-17 20:10:21.391790] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:36.811 20:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:36.811 20:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.811 20:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.811 20:10:22 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.811 20:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:36.811 20:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:13:36.811 20:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:13:36.811 20:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:36.811 20:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:36.811 20:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:36.811 20:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:36.811 20:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:36.811 20:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:36.811 20:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.811 20:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.811 20:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.811 20:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.811 20:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.811 20:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.811 20:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.811 20:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.811 20:10:22 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.811 20:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.811 "name": "raid_bdev1", 00:13:36.811 "uuid": "3062cad7-72fb-4bf3-9d12-f8d155176db0", 00:13:36.811 "strip_size_kb": 64, 00:13:36.811 "state": "online", 00:13:36.811 "raid_level": "concat", 00:13:36.811 "superblock": true, 00:13:36.811 "num_base_bdevs": 4, 00:13:36.811 "num_base_bdevs_discovered": 4, 00:13:36.811 "num_base_bdevs_operational": 4, 00:13:36.811 "base_bdevs_list": [ 00:13:36.811 { 00:13:36.811 "name": "BaseBdev1", 00:13:36.811 "uuid": "7697da71-a890-5b6a-b2ed-443fd10dc665", 00:13:36.811 "is_configured": true, 00:13:36.811 "data_offset": 2048, 00:13:36.811 "data_size": 63488 00:13:36.811 }, 00:13:36.811 { 00:13:36.811 "name": "BaseBdev2", 00:13:36.811 "uuid": "74ff8b2b-23ee-507b-86a1-01f684582f06", 00:13:36.811 "is_configured": true, 00:13:36.811 "data_offset": 2048, 00:13:36.811 "data_size": 63488 00:13:36.811 }, 00:13:36.811 { 00:13:36.811 "name": "BaseBdev3", 00:13:36.811 "uuid": "740ca759-7faf-516b-8629-a4399aa9b81b", 00:13:36.811 "is_configured": true, 00:13:36.811 "data_offset": 2048, 00:13:36.811 "data_size": 63488 00:13:36.811 }, 00:13:36.811 { 00:13:36.811 "name": "BaseBdev4", 00:13:36.811 "uuid": "21355e11-8d1b-52ef-8ed6-cd1ebb0d3e49", 00:13:36.811 "is_configured": true, 00:13:36.811 "data_offset": 2048, 00:13:36.811 "data_size": 63488 00:13:36.811 } 00:13:36.811 ] 00:13:36.811 }' 00:13:36.811 20:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.811 20:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.377 20:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:37.377 20:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.377 20:10:22 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:37.377 [2024-10-17 20:10:22.888846] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:37.377 [2024-10-17 20:10:22.888881] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:37.377 [2024-10-17 20:10:22.892403] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:37.377 [2024-10-17 20:10:22.892653] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:37.377 [2024-10-17 20:10:22.892743] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:37.377 [2024-10-17 20:10:22.892763] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:37.377 { 00:13:37.377 "results": [ 00:13:37.377 { 00:13:37.377 "job": "raid_bdev1", 00:13:37.377 "core_mask": "0x1", 00:13:37.377 "workload": "randrw", 00:13:37.377 "percentage": 50, 00:13:37.377 "status": "finished", 00:13:37.377 "queue_depth": 1, 00:13:37.377 "io_size": 131072, 00:13:37.377 "runtime": 1.494275, 00:13:37.377 "iops": 10582.389453079253, 00:13:37.377 "mibps": 1322.7986816349066, 00:13:37.377 "io_failed": 1, 00:13:37.377 "io_timeout": 0, 00:13:37.377 "avg_latency_us": 131.58421927636041, 00:13:37.377 "min_latency_us": 38.167272727272724, 00:13:37.377 "max_latency_us": 1802.24 00:13:37.377 } 00:13:37.377 ], 00:13:37.377 "core_count": 1 00:13:37.377 } 00:13:37.377 20:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.377 20:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73085 00:13:37.377 20:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 73085 ']' 00:13:37.377 20:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 73085 00:13:37.377 20:10:22 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@955 -- # uname 00:13:37.378 20:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:37.378 20:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73085 00:13:37.378 killing process with pid 73085 00:13:37.378 20:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:37.378 20:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:37.378 20:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73085' 00:13:37.378 20:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 73085 00:13:37.378 [2024-10-17 20:10:22.928526] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:37.378 20:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 73085 00:13:37.636 [2024-10-17 20:10:23.200023] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:39.034 20:10:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.XGIg9DN8fU 00:13:39.034 20:10:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:39.034 20:10:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:39.034 20:10:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.67 00:13:39.034 ************************************ 00:13:39.034 END TEST raid_write_error_test 00:13:39.034 ************************************ 00:13:39.034 20:10:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:13:39.034 20:10:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:39.034 20:10:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:39.034 20:10:24 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.67 != \0\.\0\0 ]] 00:13:39.034 00:13:39.034 real 0m4.928s 00:13:39.034 user 0m6.109s 00:13:39.034 sys 0m0.634s 00:13:39.034 20:10:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:39.034 20:10:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.034 20:10:24 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:13:39.034 20:10:24 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:13:39.034 20:10:24 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:39.034 20:10:24 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:39.034 20:10:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:39.034 ************************************ 00:13:39.034 START TEST raid_state_function_test 00:13:39.034 ************************************ 00:13:39.034 20:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 false 00:13:39.034 20:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:13:39.034 20:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:39.034 20:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:39.034 20:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:39.034 20:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:39.035 20:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:39.035 20:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:39.035 20:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:13:39.035 20:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:39.035 20:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:39.035 20:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:39.035 20:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:39.035 20:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:39.035 20:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:39.035 20:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:39.035 20:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:39.035 20:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:39.035 20:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:39.035 20:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:39.035 20:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:39.035 20:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:39.035 20:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:39.035 20:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:39.035 20:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:39.035 20:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:13:39.035 20:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:13:39.035 20:10:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:39.035 20:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:39.035 20:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73230 00:13:39.035 20:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:39.035 Process raid pid: 73230 00:13:39.035 20:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73230' 00:13:39.035 20:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73230 00:13:39.035 20:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 73230 ']' 00:13:39.035 20:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:39.035 20:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:39.035 20:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:39.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:39.035 20:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:39.035 20:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.035 [2024-10-17 20:10:24.475509] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 
00:13:39.035 [2024-10-17 20:10:24.475934] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:39.035 [2024-10-17 20:10:24.649478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:39.293 [2024-10-17 20:10:24.784546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.551 [2024-10-17 20:10:24.993240] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:39.551 [2024-10-17 20:10:24.993425] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:39.810 20:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:39.810 20:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:13:39.810 20:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:39.810 20:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.810 20:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.810 [2024-10-17 20:10:25.429804] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:39.810 [2024-10-17 20:10:25.429881] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:39.810 [2024-10-17 20:10:25.429897] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:39.810 [2024-10-17 20:10:25.429912] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:39.810 [2024-10-17 20:10:25.429921] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:13:39.810 [2024-10-17 20:10:25.429934] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:39.810 [2024-10-17 20:10:25.429944] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:39.810 [2024-10-17 20:10:25.429957] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:39.810 20:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.810 20:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:39.810 20:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:39.810 20:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:39.810 20:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:39.810 20:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:39.810 20:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:39.810 20:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.810 20:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.810 20:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.810 20:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.810 20:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:39.810 20:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.810 20:10:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.810 20:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.810 20:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.069 20:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.069 "name": "Existed_Raid", 00:13:40.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.069 "strip_size_kb": 0, 00:13:40.069 "state": "configuring", 00:13:40.069 "raid_level": "raid1", 00:13:40.069 "superblock": false, 00:13:40.069 "num_base_bdevs": 4, 00:13:40.069 "num_base_bdevs_discovered": 0, 00:13:40.069 "num_base_bdevs_operational": 4, 00:13:40.069 "base_bdevs_list": [ 00:13:40.069 { 00:13:40.069 "name": "BaseBdev1", 00:13:40.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.069 "is_configured": false, 00:13:40.069 "data_offset": 0, 00:13:40.069 "data_size": 0 00:13:40.069 }, 00:13:40.069 { 00:13:40.069 "name": "BaseBdev2", 00:13:40.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.069 "is_configured": false, 00:13:40.069 "data_offset": 0, 00:13:40.069 "data_size": 0 00:13:40.069 }, 00:13:40.069 { 00:13:40.069 "name": "BaseBdev3", 00:13:40.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.069 "is_configured": false, 00:13:40.069 "data_offset": 0, 00:13:40.069 "data_size": 0 00:13:40.069 }, 00:13:40.069 { 00:13:40.069 "name": "BaseBdev4", 00:13:40.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.069 "is_configured": false, 00:13:40.069 "data_offset": 0, 00:13:40.069 "data_size": 0 00:13:40.069 } 00:13:40.069 ] 00:13:40.069 }' 00:13:40.069 20:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.069 20:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.327 20:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:13:40.327 20:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.327 20:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.327 [2024-10-17 20:10:25.949883] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:40.327 [2024-10-17 20:10:25.949930] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:40.327 20:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.327 20:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:40.327 20:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.327 20:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.327 [2024-10-17 20:10:25.957888] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:40.327 [2024-10-17 20:10:25.957956] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:40.327 [2024-10-17 20:10:25.957970] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:40.327 [2024-10-17 20:10:25.957985] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:40.327 [2024-10-17 20:10:25.957995] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:40.327 [2024-10-17 20:10:25.958035] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:40.327 [2024-10-17 20:10:25.958047] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:40.327 [2024-10-17 20:10:25.958062] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:40.327 20:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.328 20:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:40.328 20:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.328 20:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.586 [2024-10-17 20:10:26.004897] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:40.586 BaseBdev1 00:13:40.586 20:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.586 20:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:40.586 20:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:40.586 20:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:40.586 20:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:40.586 20:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:40.586 20:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:40.586 20:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:40.586 20:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.586 20:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.586 20:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.586 20:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:40.586 20:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.586 20:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.586 [ 00:13:40.586 { 00:13:40.586 "name": "BaseBdev1", 00:13:40.586 "aliases": [ 00:13:40.586 "4c10daba-9239-408b-929d-2c446ddfcae2" 00:13:40.586 ], 00:13:40.586 "product_name": "Malloc disk", 00:13:40.586 "block_size": 512, 00:13:40.586 "num_blocks": 65536, 00:13:40.586 "uuid": "4c10daba-9239-408b-929d-2c446ddfcae2", 00:13:40.586 "assigned_rate_limits": { 00:13:40.586 "rw_ios_per_sec": 0, 00:13:40.586 "rw_mbytes_per_sec": 0, 00:13:40.586 "r_mbytes_per_sec": 0, 00:13:40.586 "w_mbytes_per_sec": 0 00:13:40.586 }, 00:13:40.586 "claimed": true, 00:13:40.586 "claim_type": "exclusive_write", 00:13:40.586 "zoned": false, 00:13:40.586 "supported_io_types": { 00:13:40.586 "read": true, 00:13:40.586 "write": true, 00:13:40.586 "unmap": true, 00:13:40.586 "flush": true, 00:13:40.586 "reset": true, 00:13:40.586 "nvme_admin": false, 00:13:40.586 "nvme_io": false, 00:13:40.586 "nvme_io_md": false, 00:13:40.586 "write_zeroes": true, 00:13:40.586 "zcopy": true, 00:13:40.586 "get_zone_info": false, 00:13:40.586 "zone_management": false, 00:13:40.586 "zone_append": false, 00:13:40.586 "compare": false, 00:13:40.586 "compare_and_write": false, 00:13:40.586 "abort": true, 00:13:40.586 "seek_hole": false, 00:13:40.586 "seek_data": false, 00:13:40.586 "copy": true, 00:13:40.586 "nvme_iov_md": false 00:13:40.586 }, 00:13:40.586 "memory_domains": [ 00:13:40.586 { 00:13:40.586 "dma_device_id": "system", 00:13:40.586 "dma_device_type": 1 00:13:40.586 }, 00:13:40.586 { 00:13:40.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:40.586 "dma_device_type": 2 00:13:40.586 } 00:13:40.586 ], 00:13:40.586 "driver_specific": {} 00:13:40.586 } 00:13:40.586 ] 00:13:40.586 20:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:13:40.586 20:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:40.586 20:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:40.586 20:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:40.586 20:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:40.586 20:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:40.586 20:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:40.586 20:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:40.586 20:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.586 20:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.586 20:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:40.586 20:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.586 20:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.586 20:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.586 20:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.586 20:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:40.586 20:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.586 20:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.586 "name": "Existed_Raid", 
00:13:40.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.586 "strip_size_kb": 0, 00:13:40.586 "state": "configuring", 00:13:40.586 "raid_level": "raid1", 00:13:40.586 "superblock": false, 00:13:40.586 "num_base_bdevs": 4, 00:13:40.586 "num_base_bdevs_discovered": 1, 00:13:40.586 "num_base_bdevs_operational": 4, 00:13:40.586 "base_bdevs_list": [ 00:13:40.586 { 00:13:40.586 "name": "BaseBdev1", 00:13:40.586 "uuid": "4c10daba-9239-408b-929d-2c446ddfcae2", 00:13:40.586 "is_configured": true, 00:13:40.586 "data_offset": 0, 00:13:40.586 "data_size": 65536 00:13:40.586 }, 00:13:40.586 { 00:13:40.586 "name": "BaseBdev2", 00:13:40.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.587 "is_configured": false, 00:13:40.587 "data_offset": 0, 00:13:40.587 "data_size": 0 00:13:40.587 }, 00:13:40.587 { 00:13:40.587 "name": "BaseBdev3", 00:13:40.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.587 "is_configured": false, 00:13:40.587 "data_offset": 0, 00:13:40.587 "data_size": 0 00:13:40.587 }, 00:13:40.587 { 00:13:40.587 "name": "BaseBdev4", 00:13:40.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.587 "is_configured": false, 00:13:40.587 "data_offset": 0, 00:13:40.587 "data_size": 0 00:13:40.587 } 00:13:40.587 ] 00:13:40.587 }' 00:13:40.587 20:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.587 20:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.153 20:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:41.153 20:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.153 20:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.153 [2024-10-17 20:10:26.561146] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:41.153 [2024-10-17 20:10:26.561213] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:41.153 20:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.153 20:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:41.153 20:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.153 20:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.153 [2024-10-17 20:10:26.569185] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:41.153 [2024-10-17 20:10:26.571691] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:41.153 [2024-10-17 20:10:26.571771] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:41.153 [2024-10-17 20:10:26.571802] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:41.153 [2024-10-17 20:10:26.571819] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:41.153 [2024-10-17 20:10:26.571829] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:41.153 [2024-10-17 20:10:26.571848] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:41.154 20:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.154 20:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:41.154 20:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:41.154 20:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:41.154 
20:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:41.154 20:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:41.154 20:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:41.154 20:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:41.154 20:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:41.154 20:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.154 20:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.154 20:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.154 20:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.154 20:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.154 20:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.154 20:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.154 20:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:41.154 20:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.154 20:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.154 "name": "Existed_Raid", 00:13:41.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.154 "strip_size_kb": 0, 00:13:41.154 "state": "configuring", 00:13:41.154 "raid_level": "raid1", 00:13:41.154 "superblock": false, 00:13:41.154 "num_base_bdevs": 4, 00:13:41.154 "num_base_bdevs_discovered": 1, 
00:13:41.154 "num_base_bdevs_operational": 4, 00:13:41.154 "base_bdevs_list": [ 00:13:41.154 { 00:13:41.154 "name": "BaseBdev1", 00:13:41.154 "uuid": "4c10daba-9239-408b-929d-2c446ddfcae2", 00:13:41.154 "is_configured": true, 00:13:41.154 "data_offset": 0, 00:13:41.154 "data_size": 65536 00:13:41.154 }, 00:13:41.154 { 00:13:41.154 "name": "BaseBdev2", 00:13:41.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.154 "is_configured": false, 00:13:41.154 "data_offset": 0, 00:13:41.154 "data_size": 0 00:13:41.154 }, 00:13:41.154 { 00:13:41.154 "name": "BaseBdev3", 00:13:41.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.154 "is_configured": false, 00:13:41.154 "data_offset": 0, 00:13:41.154 "data_size": 0 00:13:41.154 }, 00:13:41.154 { 00:13:41.154 "name": "BaseBdev4", 00:13:41.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.154 "is_configured": false, 00:13:41.154 "data_offset": 0, 00:13:41.154 "data_size": 0 00:13:41.154 } 00:13:41.154 ] 00:13:41.154 }' 00:13:41.154 20:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.154 20:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.747 20:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:41.747 20:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.747 20:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.747 [2024-10-17 20:10:27.131129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:41.747 BaseBdev2 00:13:41.747 20:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.747 20:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:41.747 20:10:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:41.747 20:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:41.747 20:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:41.747 20:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:41.747 20:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:41.747 20:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:41.747 20:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.747 20:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.747 20:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.747 20:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:41.747 20:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.747 20:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.747 [ 00:13:41.747 { 00:13:41.747 "name": "BaseBdev2", 00:13:41.747 "aliases": [ 00:13:41.747 "b80fca7a-917a-4e48-89e9-dec97db54a1b" 00:13:41.747 ], 00:13:41.747 "product_name": "Malloc disk", 00:13:41.747 "block_size": 512, 00:13:41.747 "num_blocks": 65536, 00:13:41.747 "uuid": "b80fca7a-917a-4e48-89e9-dec97db54a1b", 00:13:41.747 "assigned_rate_limits": { 00:13:41.747 "rw_ios_per_sec": 0, 00:13:41.747 "rw_mbytes_per_sec": 0, 00:13:41.747 "r_mbytes_per_sec": 0, 00:13:41.747 "w_mbytes_per_sec": 0 00:13:41.747 }, 00:13:41.747 "claimed": true, 00:13:41.747 "claim_type": "exclusive_write", 00:13:41.747 "zoned": false, 00:13:41.747 "supported_io_types": { 00:13:41.747 "read": true, 
00:13:41.747 "write": true, 00:13:41.747 "unmap": true, 00:13:41.747 "flush": true, 00:13:41.747 "reset": true, 00:13:41.747 "nvme_admin": false, 00:13:41.747 "nvme_io": false, 00:13:41.747 "nvme_io_md": false, 00:13:41.747 "write_zeroes": true, 00:13:41.747 "zcopy": true, 00:13:41.747 "get_zone_info": false, 00:13:41.747 "zone_management": false, 00:13:41.747 "zone_append": false, 00:13:41.747 "compare": false, 00:13:41.747 "compare_and_write": false, 00:13:41.747 "abort": true, 00:13:41.747 "seek_hole": false, 00:13:41.747 "seek_data": false, 00:13:41.747 "copy": true, 00:13:41.747 "nvme_iov_md": false 00:13:41.747 }, 00:13:41.747 "memory_domains": [ 00:13:41.747 { 00:13:41.747 "dma_device_id": "system", 00:13:41.747 "dma_device_type": 1 00:13:41.747 }, 00:13:41.747 { 00:13:41.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:41.747 "dma_device_type": 2 00:13:41.747 } 00:13:41.747 ], 00:13:41.747 "driver_specific": {} 00:13:41.747 } 00:13:41.747 ] 00:13:41.747 20:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.747 20:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:41.747 20:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:41.747 20:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:41.747 20:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:41.747 20:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:41.747 20:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:41.747 20:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:41.747 20:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:13:41.747 20:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:41.747 20:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.747 20:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.747 20:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.747 20:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.747 20:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.747 20:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.747 20:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:41.747 20:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.747 20:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.747 20:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.747 "name": "Existed_Raid", 00:13:41.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.747 "strip_size_kb": 0, 00:13:41.747 "state": "configuring", 00:13:41.747 "raid_level": "raid1", 00:13:41.747 "superblock": false, 00:13:41.747 "num_base_bdevs": 4, 00:13:41.747 "num_base_bdevs_discovered": 2, 00:13:41.747 "num_base_bdevs_operational": 4, 00:13:41.747 "base_bdevs_list": [ 00:13:41.747 { 00:13:41.747 "name": "BaseBdev1", 00:13:41.747 "uuid": "4c10daba-9239-408b-929d-2c446ddfcae2", 00:13:41.748 "is_configured": true, 00:13:41.748 "data_offset": 0, 00:13:41.748 "data_size": 65536 00:13:41.748 }, 00:13:41.748 { 00:13:41.748 "name": "BaseBdev2", 00:13:41.748 "uuid": "b80fca7a-917a-4e48-89e9-dec97db54a1b", 00:13:41.748 "is_configured": true, 
00:13:41.748 "data_offset": 0, 00:13:41.748 "data_size": 65536 00:13:41.748 }, 00:13:41.748 { 00:13:41.748 "name": "BaseBdev3", 00:13:41.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.748 "is_configured": false, 00:13:41.748 "data_offset": 0, 00:13:41.748 "data_size": 0 00:13:41.748 }, 00:13:41.748 { 00:13:41.748 "name": "BaseBdev4", 00:13:41.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.748 "is_configured": false, 00:13:41.748 "data_offset": 0, 00:13:41.748 "data_size": 0 00:13:41.748 } 00:13:41.748 ] 00:13:41.748 }' 00:13:41.748 20:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.748 20:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.316 20:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:42.316 20:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.316 20:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.316 [2024-10-17 20:10:27.734440] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:42.316 BaseBdev3 00:13:42.316 20:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.316 20:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:42.316 20:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:42.316 20:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:42.316 20:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:42.316 20:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:42.316 20:10:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:42.316 20:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:42.316 20:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.316 20:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.316 20:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.316 20:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:42.316 20:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.316 20:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.316 [ 00:13:42.316 { 00:13:42.316 "name": "BaseBdev3", 00:13:42.316 "aliases": [ 00:13:42.316 "93dce7bf-1869-444d-b790-ee965ab7202c" 00:13:42.316 ], 00:13:42.316 "product_name": "Malloc disk", 00:13:42.316 "block_size": 512, 00:13:42.316 "num_blocks": 65536, 00:13:42.316 "uuid": "93dce7bf-1869-444d-b790-ee965ab7202c", 00:13:42.316 "assigned_rate_limits": { 00:13:42.316 "rw_ios_per_sec": 0, 00:13:42.316 "rw_mbytes_per_sec": 0, 00:13:42.316 "r_mbytes_per_sec": 0, 00:13:42.316 "w_mbytes_per_sec": 0 00:13:42.316 }, 00:13:42.316 "claimed": true, 00:13:42.316 "claim_type": "exclusive_write", 00:13:42.316 "zoned": false, 00:13:42.316 "supported_io_types": { 00:13:42.316 "read": true, 00:13:42.316 "write": true, 00:13:42.316 "unmap": true, 00:13:42.316 "flush": true, 00:13:42.316 "reset": true, 00:13:42.317 "nvme_admin": false, 00:13:42.317 "nvme_io": false, 00:13:42.317 "nvme_io_md": false, 00:13:42.317 "write_zeroes": true, 00:13:42.317 "zcopy": true, 00:13:42.317 "get_zone_info": false, 00:13:42.317 "zone_management": false, 00:13:42.317 "zone_append": false, 00:13:42.317 "compare": false, 00:13:42.317 "compare_and_write": false, 
00:13:42.317 "abort": true, 00:13:42.317 "seek_hole": false, 00:13:42.317 "seek_data": false, 00:13:42.317 "copy": true, 00:13:42.317 "nvme_iov_md": false 00:13:42.317 }, 00:13:42.317 "memory_domains": [ 00:13:42.317 { 00:13:42.317 "dma_device_id": "system", 00:13:42.317 "dma_device_type": 1 00:13:42.317 }, 00:13:42.317 { 00:13:42.317 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.317 "dma_device_type": 2 00:13:42.317 } 00:13:42.317 ], 00:13:42.317 "driver_specific": {} 00:13:42.317 } 00:13:42.317 ] 00:13:42.317 20:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.317 20:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:42.317 20:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:42.317 20:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:42.317 20:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:42.317 20:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:42.317 20:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:42.317 20:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:42.317 20:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:42.317 20:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:42.317 20:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.317 20:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.317 20:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:13:42.317 20:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.317 20:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.317 20:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:42.317 20:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.317 20:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.317 20:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.317 20:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.317 "name": "Existed_Raid", 00:13:42.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.317 "strip_size_kb": 0, 00:13:42.317 "state": "configuring", 00:13:42.317 "raid_level": "raid1", 00:13:42.317 "superblock": false, 00:13:42.317 "num_base_bdevs": 4, 00:13:42.317 "num_base_bdevs_discovered": 3, 00:13:42.317 "num_base_bdevs_operational": 4, 00:13:42.317 "base_bdevs_list": [ 00:13:42.317 { 00:13:42.317 "name": "BaseBdev1", 00:13:42.317 "uuid": "4c10daba-9239-408b-929d-2c446ddfcae2", 00:13:42.317 "is_configured": true, 00:13:42.317 "data_offset": 0, 00:13:42.317 "data_size": 65536 00:13:42.317 }, 00:13:42.317 { 00:13:42.317 "name": "BaseBdev2", 00:13:42.317 "uuid": "b80fca7a-917a-4e48-89e9-dec97db54a1b", 00:13:42.317 "is_configured": true, 00:13:42.317 "data_offset": 0, 00:13:42.317 "data_size": 65536 00:13:42.317 }, 00:13:42.317 { 00:13:42.317 "name": "BaseBdev3", 00:13:42.317 "uuid": "93dce7bf-1869-444d-b790-ee965ab7202c", 00:13:42.317 "is_configured": true, 00:13:42.317 "data_offset": 0, 00:13:42.317 "data_size": 65536 00:13:42.317 }, 00:13:42.317 { 00:13:42.317 "name": "BaseBdev4", 00:13:42.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.317 "is_configured": false, 
00:13:42.317 "data_offset": 0, 00:13:42.317 "data_size": 0 00:13:42.317 } 00:13:42.317 ] 00:13:42.317 }' 00:13:42.317 20:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.317 20:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.884 20:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:42.884 20:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.884 20:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.884 [2024-10-17 20:10:28.323272] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:42.884 [2024-10-17 20:10:28.323347] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:42.884 [2024-10-17 20:10:28.323361] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:42.884 [2024-10-17 20:10:28.323763] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:42.884 [2024-10-17 20:10:28.323980] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:42.884 [2024-10-17 20:10:28.324051] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:42.884 [2024-10-17 20:10:28.324382] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:42.884 BaseBdev4 00:13:42.884 20:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.884 20:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:42.884 20:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:13:42.885 20:10:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:42.885 20:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:42.885 20:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:42.885 20:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:42.885 20:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:42.885 20:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.885 20:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.885 20:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.885 20:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:42.885 20:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.885 20:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.885 [ 00:13:42.885 { 00:13:42.885 "name": "BaseBdev4", 00:13:42.885 "aliases": [ 00:13:42.885 "57c19b1c-e10f-4e85-9f5d-a33030c4f674" 00:13:42.885 ], 00:13:42.885 "product_name": "Malloc disk", 00:13:42.885 "block_size": 512, 00:13:42.885 "num_blocks": 65536, 00:13:42.885 "uuid": "57c19b1c-e10f-4e85-9f5d-a33030c4f674", 00:13:42.885 "assigned_rate_limits": { 00:13:42.885 "rw_ios_per_sec": 0, 00:13:42.885 "rw_mbytes_per_sec": 0, 00:13:42.885 "r_mbytes_per_sec": 0, 00:13:42.885 "w_mbytes_per_sec": 0 00:13:42.885 }, 00:13:42.885 "claimed": true, 00:13:42.885 "claim_type": "exclusive_write", 00:13:42.885 "zoned": false, 00:13:42.885 "supported_io_types": { 00:13:42.885 "read": true, 00:13:42.885 "write": true, 00:13:42.885 "unmap": true, 00:13:42.885 "flush": true, 00:13:42.885 "reset": true, 00:13:42.885 
"nvme_admin": false, 00:13:42.885 "nvme_io": false, 00:13:42.885 "nvme_io_md": false, 00:13:42.885 "write_zeroes": true, 00:13:42.885 "zcopy": true, 00:13:42.885 "get_zone_info": false, 00:13:42.885 "zone_management": false, 00:13:42.885 "zone_append": false, 00:13:42.885 "compare": false, 00:13:42.885 "compare_and_write": false, 00:13:42.885 "abort": true, 00:13:42.885 "seek_hole": false, 00:13:42.885 "seek_data": false, 00:13:42.885 "copy": true, 00:13:42.885 "nvme_iov_md": false 00:13:42.885 }, 00:13:42.885 "memory_domains": [ 00:13:42.885 { 00:13:42.885 "dma_device_id": "system", 00:13:42.885 "dma_device_type": 1 00:13:42.885 }, 00:13:42.885 { 00:13:42.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.885 "dma_device_type": 2 00:13:42.885 } 00:13:42.885 ], 00:13:42.885 "driver_specific": {} 00:13:42.885 } 00:13:42.885 ] 00:13:42.885 20:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.885 20:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:42.885 20:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:42.885 20:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:42.885 20:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:13:42.885 20:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:42.885 20:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:42.885 20:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:42.885 20:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:42.885 20:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:42.885 20:10:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.885 20:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.885 20:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.885 20:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.885 20:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.885 20:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:42.885 20:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.885 20:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.885 20:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.885 20:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.885 "name": "Existed_Raid", 00:13:42.885 "uuid": "22321f82-6cdc-4d02-85f9-b1f0ec3f504e", 00:13:42.885 "strip_size_kb": 0, 00:13:42.885 "state": "online", 00:13:42.885 "raid_level": "raid1", 00:13:42.885 "superblock": false, 00:13:42.885 "num_base_bdevs": 4, 00:13:42.885 "num_base_bdevs_discovered": 4, 00:13:42.885 "num_base_bdevs_operational": 4, 00:13:42.885 "base_bdevs_list": [ 00:13:42.885 { 00:13:42.885 "name": "BaseBdev1", 00:13:42.885 "uuid": "4c10daba-9239-408b-929d-2c446ddfcae2", 00:13:42.885 "is_configured": true, 00:13:42.885 "data_offset": 0, 00:13:42.885 "data_size": 65536 00:13:42.885 }, 00:13:42.885 { 00:13:42.885 "name": "BaseBdev2", 00:13:42.885 "uuid": "b80fca7a-917a-4e48-89e9-dec97db54a1b", 00:13:42.885 "is_configured": true, 00:13:42.885 "data_offset": 0, 00:13:42.885 "data_size": 65536 00:13:42.885 }, 00:13:42.885 { 00:13:42.885 "name": "BaseBdev3", 00:13:42.885 "uuid": 
"93dce7bf-1869-444d-b790-ee965ab7202c", 00:13:42.885 "is_configured": true, 00:13:42.885 "data_offset": 0, 00:13:42.885 "data_size": 65536 00:13:42.885 }, 00:13:42.885 { 00:13:42.885 "name": "BaseBdev4", 00:13:42.885 "uuid": "57c19b1c-e10f-4e85-9f5d-a33030c4f674", 00:13:42.885 "is_configured": true, 00:13:42.885 "data_offset": 0, 00:13:42.885 "data_size": 65536 00:13:42.885 } 00:13:42.885 ] 00:13:42.885 }' 00:13:42.885 20:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.885 20:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.454 20:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:43.454 20:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:43.454 20:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:43.454 20:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:43.454 20:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:43.454 20:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:43.454 20:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:43.454 20:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.454 20:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:43.454 20:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.454 [2024-10-17 20:10:28.891972] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:43.454 20:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.454 20:10:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:43.454 "name": "Existed_Raid", 00:13:43.454 "aliases": [ 00:13:43.454 "22321f82-6cdc-4d02-85f9-b1f0ec3f504e" 00:13:43.454 ], 00:13:43.454 "product_name": "Raid Volume", 00:13:43.454 "block_size": 512, 00:13:43.454 "num_blocks": 65536, 00:13:43.454 "uuid": "22321f82-6cdc-4d02-85f9-b1f0ec3f504e", 00:13:43.454 "assigned_rate_limits": { 00:13:43.454 "rw_ios_per_sec": 0, 00:13:43.454 "rw_mbytes_per_sec": 0, 00:13:43.454 "r_mbytes_per_sec": 0, 00:13:43.454 "w_mbytes_per_sec": 0 00:13:43.454 }, 00:13:43.454 "claimed": false, 00:13:43.454 "zoned": false, 00:13:43.454 "supported_io_types": { 00:13:43.454 "read": true, 00:13:43.454 "write": true, 00:13:43.454 "unmap": false, 00:13:43.454 "flush": false, 00:13:43.454 "reset": true, 00:13:43.454 "nvme_admin": false, 00:13:43.454 "nvme_io": false, 00:13:43.454 "nvme_io_md": false, 00:13:43.454 "write_zeroes": true, 00:13:43.454 "zcopy": false, 00:13:43.454 "get_zone_info": false, 00:13:43.454 "zone_management": false, 00:13:43.454 "zone_append": false, 00:13:43.454 "compare": false, 00:13:43.454 "compare_and_write": false, 00:13:43.454 "abort": false, 00:13:43.454 "seek_hole": false, 00:13:43.454 "seek_data": false, 00:13:43.454 "copy": false, 00:13:43.454 "nvme_iov_md": false 00:13:43.454 }, 00:13:43.454 "memory_domains": [ 00:13:43.454 { 00:13:43.454 "dma_device_id": "system", 00:13:43.454 "dma_device_type": 1 00:13:43.454 }, 00:13:43.454 { 00:13:43.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.454 "dma_device_type": 2 00:13:43.454 }, 00:13:43.454 { 00:13:43.454 "dma_device_id": "system", 00:13:43.454 "dma_device_type": 1 00:13:43.454 }, 00:13:43.454 { 00:13:43.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.454 "dma_device_type": 2 00:13:43.454 }, 00:13:43.454 { 00:13:43.454 "dma_device_id": "system", 00:13:43.454 "dma_device_type": 1 00:13:43.454 }, 00:13:43.454 { 00:13:43.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:13:43.454 "dma_device_type": 2 00:13:43.454 }, 00:13:43.454 { 00:13:43.454 "dma_device_id": "system", 00:13:43.454 "dma_device_type": 1 00:13:43.454 }, 00:13:43.454 { 00:13:43.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.454 "dma_device_type": 2 00:13:43.454 } 00:13:43.454 ], 00:13:43.454 "driver_specific": { 00:13:43.454 "raid": { 00:13:43.454 "uuid": "22321f82-6cdc-4d02-85f9-b1f0ec3f504e", 00:13:43.454 "strip_size_kb": 0, 00:13:43.454 "state": "online", 00:13:43.454 "raid_level": "raid1", 00:13:43.454 "superblock": false, 00:13:43.454 "num_base_bdevs": 4, 00:13:43.454 "num_base_bdevs_discovered": 4, 00:13:43.454 "num_base_bdevs_operational": 4, 00:13:43.454 "base_bdevs_list": [ 00:13:43.454 { 00:13:43.454 "name": "BaseBdev1", 00:13:43.454 "uuid": "4c10daba-9239-408b-929d-2c446ddfcae2", 00:13:43.455 "is_configured": true, 00:13:43.455 "data_offset": 0, 00:13:43.455 "data_size": 65536 00:13:43.455 }, 00:13:43.455 { 00:13:43.455 "name": "BaseBdev2", 00:13:43.455 "uuid": "b80fca7a-917a-4e48-89e9-dec97db54a1b", 00:13:43.455 "is_configured": true, 00:13:43.455 "data_offset": 0, 00:13:43.455 "data_size": 65536 00:13:43.455 }, 00:13:43.455 { 00:13:43.455 "name": "BaseBdev3", 00:13:43.455 "uuid": "93dce7bf-1869-444d-b790-ee965ab7202c", 00:13:43.455 "is_configured": true, 00:13:43.455 "data_offset": 0, 00:13:43.455 "data_size": 65536 00:13:43.455 }, 00:13:43.455 { 00:13:43.455 "name": "BaseBdev4", 00:13:43.455 "uuid": "57c19b1c-e10f-4e85-9f5d-a33030c4f674", 00:13:43.455 "is_configured": true, 00:13:43.455 "data_offset": 0, 00:13:43.455 "data_size": 65536 00:13:43.455 } 00:13:43.455 ] 00:13:43.455 } 00:13:43.455 } 00:13:43.455 }' 00:13:43.455 20:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:43.455 20:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:43.455 BaseBdev2 00:13:43.455 BaseBdev3 
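The test repeatedly pipes `rpc_cmd bdev_raid_get_bdevs all` through the jq filter `'.[] | select(.name == "Existed_Raid")'` to pull out the raid bdev's state, as in the dump above. This is a minimal Python sketch of that same selection and of the discovered/operational count check; the sample list is trimmed to the fields inspected here (a real response also carries `base_bdevs_list`, `superblock`, and more), with values copied from the "online" dump above.

```python
import json

# Trimmed sample shaped like bdev_raid_get_bdevs output; field values
# match the "online" Existed_Raid dump in the log above.
bdevs = json.loads("""
[
  {
    "name": "Existed_Raid",
    "state": "online",
    "raid_level": "raid1",
    "strip_size_kb": 0,
    "num_base_bdevs": 4,
    "num_base_bdevs_discovered": 4,
    "num_base_bdevs_operational": 4
  }
]
""")

# Equivalent of the script's jq filter:
#   jq -r '.[] | select(.name == "Existed_Raid")'
raid = next(b for b in bdevs if b["name"] == "Existed_Raid")

# The state checks the test keeps re-running after each base bdev is added.
assert raid["state"] == "online"
assert raid["num_base_bdevs_discovered"] == raid["num_base_bdevs_operational"]
print(raid["raid_level"])  # raid1
```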
00:13:43.455 BaseBdev4' 00:13:43.455 20:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:43.455 20:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:43.455 20:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:43.455 20:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:43.455 20:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:43.455 20:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.455 20:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.455 20:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.455 20:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:43.455 20:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:43.455 20:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:43.455 20:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:43.455 20:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.455 20:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.455 20:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:43.714 20:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.715 20:10:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:43.715 20:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:43.715 20:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:43.715 20:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:43.715 20:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.715 20:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.715 20:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:43.715 20:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.715 20:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:43.715 20:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:43.715 20:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:43.715 20:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:43.715 20:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.715 20:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.715 20:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:43.715 20:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.715 20:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:43.715 20:10:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:43.715 20:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:43.715 20:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.715 20:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.715 [2024-10-17 20:10:29.263667] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:43.715 20:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.715 20:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:43.715 20:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:13:43.715 20:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:43.715 20:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:43.715 20:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:43.715 20:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:43.715 20:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:43.715 20:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:43.715 20:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:43.715 20:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:43.715 20:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:43.715 20:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.715 
20:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.715 20:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.715 20:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.715 20:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.715 20:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.715 20:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:43.715 20:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.715 20:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.975 20:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.975 "name": "Existed_Raid", 00:13:43.975 "uuid": "22321f82-6cdc-4d02-85f9-b1f0ec3f504e", 00:13:43.975 "strip_size_kb": 0, 00:13:43.975 "state": "online", 00:13:43.975 "raid_level": "raid1", 00:13:43.975 "superblock": false, 00:13:43.975 "num_base_bdevs": 4, 00:13:43.975 "num_base_bdevs_discovered": 3, 00:13:43.975 "num_base_bdevs_operational": 3, 00:13:43.975 "base_bdevs_list": [ 00:13:43.975 { 00:13:43.975 "name": null, 00:13:43.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.975 "is_configured": false, 00:13:43.975 "data_offset": 0, 00:13:43.975 "data_size": 65536 00:13:43.975 }, 00:13:43.975 { 00:13:43.975 "name": "BaseBdev2", 00:13:43.975 "uuid": "b80fca7a-917a-4e48-89e9-dec97db54a1b", 00:13:43.975 "is_configured": true, 00:13:43.975 "data_offset": 0, 00:13:43.975 "data_size": 65536 00:13:43.975 }, 00:13:43.975 { 00:13:43.975 "name": "BaseBdev3", 00:13:43.975 "uuid": "93dce7bf-1869-444d-b790-ee965ab7202c", 00:13:43.975 "is_configured": true, 00:13:43.975 "data_offset": 0, 
00:13:43.975 "data_size": 65536 00:13:43.975 }, 00:13:43.975 { 00:13:43.975 "name": "BaseBdev4", 00:13:43.975 "uuid": "57c19b1c-e10f-4e85-9f5d-a33030c4f674", 00:13:43.975 "is_configured": true, 00:13:43.975 "data_offset": 0, 00:13:43.975 "data_size": 65536 00:13:43.975 } 00:13:43.975 ] 00:13:43.975 }' 00:13:43.975 20:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.975 20:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.233 20:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:44.233 20:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:44.492 20:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.492 20:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.492 20:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:44.492 20:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.492 20:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.492 20:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:44.492 20:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:44.492 20:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:44.492 20:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.492 20:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.492 [2024-10-17 20:10:29.940760] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:44.492 20:10:30 bdev_raid.raid_state_function_test 
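The `verify_raid_bdev_state Existed_Raid online raid1 0 3` call above checks that after `bdev_malloc_delete BaseBdev1`, raid1's redundancy keeps the array online with 3 of 4 base bdevs operational. This is a hedged Python sketch of that field comparison (not the actual `bdev_raid.sh` implementation), with values taken from the dump above.

```python
def verify_raid_bdev_state(info, expected_state, raid_level,
                           strip_size, num_operational):
    """Sketch of the shell helper's checks: compare the raid bdev's
    reported fields against the expected values."""
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == num_operational

# After BaseBdev1 is removed, the log shows the array still online,
# with a null entry in base_bdevs_list and 3 of 4 bdevs operational.
info = {"state": "online", "raid_level": "raid1",
        "strip_size_kb": 0, "num_base_bdevs_operational": 3}
verify_raid_bdev_state(info, "online", "raid1", 0, 3)
print("ok")
```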
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.492 20:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:44.492 20:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:44.492 20:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.492 20:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.492 20:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.492 20:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:44.492 20:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.492 20:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:44.492 20:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:44.492 20:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:44.492 20:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.492 20:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.492 [2024-10-17 20:10:30.085769] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:44.751 20:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.751 20:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:44.751 20:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:44.751 20:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.751 20:10:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.751 20:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:44.751 20:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.751 20:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.751 20:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:44.751 20:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:44.751 20:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:44.751 20:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.751 20:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.751 [2024-10-17 20:10:30.236927] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:44.751 [2024-10-17 20:10:30.237097] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:44.751 [2024-10-17 20:10:30.317491] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:44.752 [2024-10-17 20:10:30.317580] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:44.752 [2024-10-17 20:10:30.317599] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:44.752 20:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.752 20:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:44.752 20:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:44.752 20:10:30 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.752 20:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.752 20:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.752 20:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:44.752 20:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.752 20:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:44.752 20:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:44.752 20:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:44.752 20:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:44.752 20:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:44.752 20:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:44.752 20:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.752 20:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.013 BaseBdev2 00:13:45.013 20:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.013 20:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:45.013 20:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:45.013 20:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:45.013 20:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:45.013 20:10:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:45.013 20:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:45.013 20:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:45.013 20:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.013 20:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.013 20:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.013 20:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:45.013 20:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.013 20:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.013 [ 00:13:45.013 { 00:13:45.013 "name": "BaseBdev2", 00:13:45.013 "aliases": [ 00:13:45.013 "789d7f14-7dfa-4529-a12c-2ca2d9ac58c1" 00:13:45.013 ], 00:13:45.013 "product_name": "Malloc disk", 00:13:45.013 "block_size": 512, 00:13:45.013 "num_blocks": 65536, 00:13:45.013 "uuid": "789d7f14-7dfa-4529-a12c-2ca2d9ac58c1", 00:13:45.013 "assigned_rate_limits": { 00:13:45.013 "rw_ios_per_sec": 0, 00:13:45.013 "rw_mbytes_per_sec": 0, 00:13:45.013 "r_mbytes_per_sec": 0, 00:13:45.013 "w_mbytes_per_sec": 0 00:13:45.013 }, 00:13:45.013 "claimed": false, 00:13:45.013 "zoned": false, 00:13:45.013 "supported_io_types": { 00:13:45.013 "read": true, 00:13:45.013 "write": true, 00:13:45.013 "unmap": true, 00:13:45.013 "flush": true, 00:13:45.013 "reset": true, 00:13:45.013 "nvme_admin": false, 00:13:45.013 "nvme_io": false, 00:13:45.013 "nvme_io_md": false, 00:13:45.013 "write_zeroes": true, 00:13:45.013 "zcopy": true, 00:13:45.013 "get_zone_info": false, 00:13:45.013 "zone_management": false, 00:13:45.013 "zone_append": false, 
00:13:45.013 "compare": false, 00:13:45.013 "compare_and_write": false, 00:13:45.013 "abort": true, 00:13:45.013 "seek_hole": false, 00:13:45.013 "seek_data": false, 00:13:45.013 "copy": true, 00:13:45.013 "nvme_iov_md": false 00:13:45.013 }, 00:13:45.013 "memory_domains": [ 00:13:45.013 { 00:13:45.013 "dma_device_id": "system", 00:13:45.013 "dma_device_type": 1 00:13:45.013 }, 00:13:45.013 { 00:13:45.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:45.013 "dma_device_type": 2 00:13:45.013 } 00:13:45.013 ], 00:13:45.013 "driver_specific": {} 00:13:45.013 } 00:13:45.013 ] 00:13:45.013 20:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.013 20:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:45.013 20:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:45.013 20:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:45.013 20:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:45.013 20:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.013 20:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.013 BaseBdev3 00:13:45.013 20:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.013 20:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:45.013 20:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:45.013 20:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:45.013 20:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:45.013 20:10:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:45.013 20:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:45.013 20:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:45.013 20:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.013 20:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.013 20:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.013 20:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:45.013 20:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.013 20:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.013 [ 00:13:45.013 { 00:13:45.013 "name": "BaseBdev3", 00:13:45.013 "aliases": [ 00:13:45.013 "21362af0-0a02-44d8-b649-679bb42daa85" 00:13:45.013 ], 00:13:45.013 "product_name": "Malloc disk", 00:13:45.013 "block_size": 512, 00:13:45.013 "num_blocks": 65536, 00:13:45.013 "uuid": "21362af0-0a02-44d8-b649-679bb42daa85", 00:13:45.013 "assigned_rate_limits": { 00:13:45.013 "rw_ios_per_sec": 0, 00:13:45.013 "rw_mbytes_per_sec": 0, 00:13:45.013 "r_mbytes_per_sec": 0, 00:13:45.013 "w_mbytes_per_sec": 0 00:13:45.013 }, 00:13:45.013 "claimed": false, 00:13:45.013 "zoned": false, 00:13:45.013 "supported_io_types": { 00:13:45.013 "read": true, 00:13:45.013 "write": true, 00:13:45.013 "unmap": true, 00:13:45.013 "flush": true, 00:13:45.013 "reset": true, 00:13:45.013 "nvme_admin": false, 00:13:45.013 "nvme_io": false, 00:13:45.013 "nvme_io_md": false, 00:13:45.013 "write_zeroes": true, 00:13:45.013 "zcopy": true, 00:13:45.013 "get_zone_info": false, 00:13:45.013 "zone_management": false, 00:13:45.013 "zone_append": false, 
00:13:45.013 "compare": false, 00:13:45.013 "compare_and_write": false, 00:13:45.013 "abort": true, 00:13:45.013 "seek_hole": false, 00:13:45.013 "seek_data": false, 00:13:45.013 "copy": true, 00:13:45.013 "nvme_iov_md": false 00:13:45.013 }, 00:13:45.013 "memory_domains": [ 00:13:45.013 { 00:13:45.013 "dma_device_id": "system", 00:13:45.013 "dma_device_type": 1 00:13:45.013 }, 00:13:45.013 { 00:13:45.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:45.013 "dma_device_type": 2 00:13:45.013 } 00:13:45.013 ], 00:13:45.013 "driver_specific": {} 00:13:45.013 } 00:13:45.013 ] 00:13:45.013 20:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.013 20:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:45.013 20:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:45.013 20:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:45.013 20:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:45.013 20:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.013 20:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.013 BaseBdev4 00:13:45.013 20:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.013 20:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:45.013 20:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:13:45.013 20:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:45.013 20:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:45.013 20:10:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:45.013 20:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:45.013 20:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:45.013 20:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.013 20:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.013 20:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.013 20:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:45.014 20:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.014 20:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.014 [ 00:13:45.014 { 00:13:45.014 "name": "BaseBdev4", 00:13:45.014 "aliases": [ 00:13:45.014 "6caa6546-4312-4d6a-8bbd-734ceccd15ec" 00:13:45.014 ], 00:13:45.014 "product_name": "Malloc disk", 00:13:45.014 "block_size": 512, 00:13:45.014 "num_blocks": 65536, 00:13:45.014 "uuid": "6caa6546-4312-4d6a-8bbd-734ceccd15ec", 00:13:45.014 "assigned_rate_limits": { 00:13:45.014 "rw_ios_per_sec": 0, 00:13:45.014 "rw_mbytes_per_sec": 0, 00:13:45.014 "r_mbytes_per_sec": 0, 00:13:45.014 "w_mbytes_per_sec": 0 00:13:45.014 }, 00:13:45.014 "claimed": false, 00:13:45.014 "zoned": false, 00:13:45.014 "supported_io_types": { 00:13:45.014 "read": true, 00:13:45.014 "write": true, 00:13:45.014 "unmap": true, 00:13:45.014 "flush": true, 00:13:45.014 "reset": true, 00:13:45.014 "nvme_admin": false, 00:13:45.014 "nvme_io": false, 00:13:45.014 "nvme_io_md": false, 00:13:45.014 "write_zeroes": true, 00:13:45.014 "zcopy": true, 00:13:45.014 "get_zone_info": false, 00:13:45.014 "zone_management": false, 00:13:45.014 "zone_append": false, 
00:13:45.014 "compare": false, 00:13:45.014 "compare_and_write": false, 00:13:45.014 "abort": true, 00:13:45.014 "seek_hole": false, 00:13:45.014 "seek_data": false, 00:13:45.014 "copy": true, 00:13:45.014 "nvme_iov_md": false 00:13:45.014 }, 00:13:45.014 "memory_domains": [ 00:13:45.014 { 00:13:45.014 "dma_device_id": "system", 00:13:45.014 "dma_device_type": 1 00:13:45.014 }, 00:13:45.014 { 00:13:45.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:45.014 "dma_device_type": 2 00:13:45.014 } 00:13:45.014 ], 00:13:45.014 "driver_specific": {} 00:13:45.014 } 00:13:45.014 ] 00:13:45.014 20:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.014 20:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:45.014 20:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:45.014 20:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:45.014 20:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:45.014 20:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.014 20:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.014 [2024-10-17 20:10:30.597723] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:45.014 [2024-10-17 20:10:30.597794] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:45.014 [2024-10-17 20:10:30.597823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:45.014 [2024-10-17 20:10:30.600413] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:45.014 [2024-10-17 20:10:30.600483] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:45.014 20:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.014 20:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:45.014 20:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:45.014 20:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:45.014 20:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:45.014 20:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:45.014 20:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:45.014 20:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.014 20:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.014 20:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.014 20:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.014 20:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.014 20:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.014 20:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:45.014 20:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.014 20:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.014 20:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:13:45.014 "name": "Existed_Raid", 00:13:45.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.014 "strip_size_kb": 0, 00:13:45.014 "state": "configuring", 00:13:45.014 "raid_level": "raid1", 00:13:45.014 "superblock": false, 00:13:45.014 "num_base_bdevs": 4, 00:13:45.014 "num_base_bdevs_discovered": 3, 00:13:45.014 "num_base_bdevs_operational": 4, 00:13:45.014 "base_bdevs_list": [ 00:13:45.014 { 00:13:45.014 "name": "BaseBdev1", 00:13:45.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.014 "is_configured": false, 00:13:45.014 "data_offset": 0, 00:13:45.014 "data_size": 0 00:13:45.014 }, 00:13:45.014 { 00:13:45.014 "name": "BaseBdev2", 00:13:45.014 "uuid": "789d7f14-7dfa-4529-a12c-2ca2d9ac58c1", 00:13:45.014 "is_configured": true, 00:13:45.014 "data_offset": 0, 00:13:45.014 "data_size": 65536 00:13:45.014 }, 00:13:45.014 { 00:13:45.014 "name": "BaseBdev3", 00:13:45.014 "uuid": "21362af0-0a02-44d8-b649-679bb42daa85", 00:13:45.014 "is_configured": true, 00:13:45.014 "data_offset": 0, 00:13:45.014 "data_size": 65536 00:13:45.014 }, 00:13:45.014 { 00:13:45.014 "name": "BaseBdev4", 00:13:45.014 "uuid": "6caa6546-4312-4d6a-8bbd-734ceccd15ec", 00:13:45.014 "is_configured": true, 00:13:45.014 "data_offset": 0, 00:13:45.014 "data_size": 65536 00:13:45.014 } 00:13:45.014 ] 00:13:45.014 }' 00:13:45.014 20:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.014 20:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.589 20:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:45.589 20:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.589 20:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.589 [2024-10-17 20:10:31.133910] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
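The `verify_raid_bdev_state` checks traced above pull the `Existed_Raid` entry out of `rpc_cmd bdev_raid_get_bdevs all` with `jq -r '.[] | select(.name == "Existed_Raid")'` and compare the expected state, RAID level, strip size, and base-bdev counts. As a rough, self-contained illustration only (not the actual autotest helper), the same field checks can be expressed in Python against the JSON shape recorded in the log, at the point where `BaseBdev1` does not yet exist and `BaseBdev2`..`BaseBdev4` are claimed:

```python
import json

# JSON shape copied from the bdev_raid_get_bdevs output recorded above
# (trimmed to the fields the verification step actually inspects).
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "strip_size_kb": 0,
  "state": "configuring",
  "raid_level": "raid1",
  "superblock": false,
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 3,
  "num_base_bdevs_operational": 4,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": false},
    {"name": "BaseBdev2", "is_configured": true},
    {"name": "BaseBdev3", "is_configured": true},
    {"name": "BaseBdev4", "is_configured": true}
  ]
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size,
                           num_operational):
    """Sketch of the comparisons verify_raid_bdev_state drives via jq."""
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == num_operational
    # discovered must match the number of configured base bdevs
    discovered = sum(b["is_configured"] for b in info["base_bdevs_list"])
    assert discovered == info["num_base_bdevs_discovered"]

verify_raid_bdev_state(raid_bdev_info, "configuring", "raid1", 0, 4)
```

This mirrors why the log shows `expected_state=configuring` with `num_base_bdevs_discovered: 3`: three of the four base bdevs are claimed, so the raid bdev stays in the configuring state until `BaseBdev1` is created.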
00:13:45.589 20:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.589 20:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:45.589 20:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:45.589 20:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:45.589 20:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:45.589 20:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:45.589 20:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:45.589 20:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.589 20:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.589 20:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.589 20:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.589 20:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:45.589 20:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.589 20:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.589 20:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.589 20:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.589 20:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.589 "name": "Existed_Raid", 00:13:45.589 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:45.589 "strip_size_kb": 0, 00:13:45.589 "state": "configuring", 00:13:45.589 "raid_level": "raid1", 00:13:45.589 "superblock": false, 00:13:45.589 "num_base_bdevs": 4, 00:13:45.589 "num_base_bdevs_discovered": 2, 00:13:45.589 "num_base_bdevs_operational": 4, 00:13:45.589 "base_bdevs_list": [ 00:13:45.589 { 00:13:45.589 "name": "BaseBdev1", 00:13:45.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.589 "is_configured": false, 00:13:45.589 "data_offset": 0, 00:13:45.589 "data_size": 0 00:13:45.589 }, 00:13:45.589 { 00:13:45.589 "name": null, 00:13:45.589 "uuid": "789d7f14-7dfa-4529-a12c-2ca2d9ac58c1", 00:13:45.589 "is_configured": false, 00:13:45.589 "data_offset": 0, 00:13:45.589 "data_size": 65536 00:13:45.589 }, 00:13:45.589 { 00:13:45.589 "name": "BaseBdev3", 00:13:45.589 "uuid": "21362af0-0a02-44d8-b649-679bb42daa85", 00:13:45.589 "is_configured": true, 00:13:45.589 "data_offset": 0, 00:13:45.589 "data_size": 65536 00:13:45.589 }, 00:13:45.589 { 00:13:45.589 "name": "BaseBdev4", 00:13:45.589 "uuid": "6caa6546-4312-4d6a-8bbd-734ceccd15ec", 00:13:45.589 "is_configured": true, 00:13:45.589 "data_offset": 0, 00:13:45.589 "data_size": 65536 00:13:45.589 } 00:13:45.589 ] 00:13:45.589 }' 00:13:45.589 20:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.589 20:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.156 20:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:46.156 20:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.156 20:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.156 20:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.156 20:10:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.156 20:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:46.156 20:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:46.156 20:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.156 20:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.156 [2024-10-17 20:10:31.756484] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:46.156 BaseBdev1 00:13:46.156 20:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.156 20:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:46.156 20:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:46.156 20:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:46.156 20:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:46.156 20:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:46.156 20:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:46.156 20:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:46.156 20:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.156 20:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.156 20:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.156 20:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:13:46.156 20:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.156 20:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.156 [ 00:13:46.156 { 00:13:46.156 "name": "BaseBdev1", 00:13:46.156 "aliases": [ 00:13:46.156 "aad774ba-6d2b-48ec-9d76-14e02fe49643" 00:13:46.156 ], 00:13:46.156 "product_name": "Malloc disk", 00:13:46.156 "block_size": 512, 00:13:46.156 "num_blocks": 65536, 00:13:46.156 "uuid": "aad774ba-6d2b-48ec-9d76-14e02fe49643", 00:13:46.156 "assigned_rate_limits": { 00:13:46.156 "rw_ios_per_sec": 0, 00:13:46.156 "rw_mbytes_per_sec": 0, 00:13:46.156 "r_mbytes_per_sec": 0, 00:13:46.156 "w_mbytes_per_sec": 0 00:13:46.156 }, 00:13:46.156 "claimed": true, 00:13:46.156 "claim_type": "exclusive_write", 00:13:46.156 "zoned": false, 00:13:46.156 "supported_io_types": { 00:13:46.156 "read": true, 00:13:46.156 "write": true, 00:13:46.156 "unmap": true, 00:13:46.156 "flush": true, 00:13:46.156 "reset": true, 00:13:46.156 "nvme_admin": false, 00:13:46.156 "nvme_io": false, 00:13:46.156 "nvme_io_md": false, 00:13:46.156 "write_zeroes": true, 00:13:46.156 "zcopy": true, 00:13:46.156 "get_zone_info": false, 00:13:46.156 "zone_management": false, 00:13:46.156 "zone_append": false, 00:13:46.156 "compare": false, 00:13:46.156 "compare_and_write": false, 00:13:46.156 "abort": true, 00:13:46.156 "seek_hole": false, 00:13:46.156 "seek_data": false, 00:13:46.156 "copy": true, 00:13:46.156 "nvme_iov_md": false 00:13:46.156 }, 00:13:46.156 "memory_domains": [ 00:13:46.156 { 00:13:46.156 "dma_device_id": "system", 00:13:46.156 "dma_device_type": 1 00:13:46.156 }, 00:13:46.156 { 00:13:46.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:46.156 "dma_device_type": 2 00:13:46.156 } 00:13:46.156 ], 00:13:46.156 "driver_specific": {} 00:13:46.156 } 00:13:46.156 ] 00:13:46.156 20:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
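Each `BaseBdevN` created in this trace is gated on `waitforbdev`, which sets a default `bdev_timeout=2000` and issues `rpc_cmd bdev_get_bdevs -b <name> -t 2000` until the bdev is reported. A simplified stand-in for that retry loop, using a stubbed lookup table instead of a live RPC (names, sizes, and timing here are illustrative, not taken from a running target):

```python
import time

def wait_for_bdev(get_bdev, name, timeout_s=2.0, poll_s=0.1):
    """Poll get_bdev(name) until it returns a record or timeout_s elapses,
    roughly the behavior waitforbdev gets from bdev_get_bdevs -t <ms>."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        bdev = get_bdev(name)
        if bdev is not None:
            return bdev
        time.sleep(poll_s)
    raise TimeoutError(f"bdev {name} did not appear within {timeout_s}s")

# Stubbed registry standing in for the RPC server's bdev table.
registry = {
    "BaseBdev1": {"name": "BaseBdev1", "block_size": 512,
                  "num_blocks": 65536, "claimed": True},
}
found = wait_for_bdev(registry.get, "BaseBdev1")
```

In the real helper the timeout is handled server-side by the `-t 2000` argument; the loop above just makes the wait-or-fail contract explicit.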
00:13:46.156 20:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:46.156 20:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:46.156 20:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:46.156 20:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:46.156 20:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:46.156 20:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:46.156 20:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:46.156 20:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.156 20:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.156 20:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.156 20:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.156 20:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.156 20:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:46.156 20:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.156 20:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.156 20:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.415 20:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.415 "name": "Existed_Raid", 00:13:46.415 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:46.415 "strip_size_kb": 0, 00:13:46.415 "state": "configuring", 00:13:46.415 "raid_level": "raid1", 00:13:46.415 "superblock": false, 00:13:46.415 "num_base_bdevs": 4, 00:13:46.415 "num_base_bdevs_discovered": 3, 00:13:46.415 "num_base_bdevs_operational": 4, 00:13:46.415 "base_bdevs_list": [ 00:13:46.415 { 00:13:46.415 "name": "BaseBdev1", 00:13:46.415 "uuid": "aad774ba-6d2b-48ec-9d76-14e02fe49643", 00:13:46.415 "is_configured": true, 00:13:46.415 "data_offset": 0, 00:13:46.415 "data_size": 65536 00:13:46.415 }, 00:13:46.415 { 00:13:46.415 "name": null, 00:13:46.415 "uuid": "789d7f14-7dfa-4529-a12c-2ca2d9ac58c1", 00:13:46.415 "is_configured": false, 00:13:46.415 "data_offset": 0, 00:13:46.415 "data_size": 65536 00:13:46.415 }, 00:13:46.415 { 00:13:46.415 "name": "BaseBdev3", 00:13:46.415 "uuid": "21362af0-0a02-44d8-b649-679bb42daa85", 00:13:46.415 "is_configured": true, 00:13:46.415 "data_offset": 0, 00:13:46.415 "data_size": 65536 00:13:46.415 }, 00:13:46.415 { 00:13:46.415 "name": "BaseBdev4", 00:13:46.415 "uuid": "6caa6546-4312-4d6a-8bbd-734ceccd15ec", 00:13:46.415 "is_configured": true, 00:13:46.415 "data_offset": 0, 00:13:46.415 "data_size": 65536 00:13:46.415 } 00:13:46.415 ] 00:13:46.415 }' 00:13:46.415 20:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.415 20:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.674 20:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.674 20:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.674 20:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.674 20:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:46.674 20:10:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.932 20:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:46.932 20:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:46.932 20:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.932 20:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.932 [2024-10-17 20:10:32.348746] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:46.932 20:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.932 20:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:46.932 20:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:46.932 20:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:46.932 20:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:46.932 20:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:46.932 20:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:46.932 20:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.932 20:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.932 20:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.932 20:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.932 20:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:13:46.932 20:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:46.932 20:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.932 20:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.932 20:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.932 20:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.932 "name": "Existed_Raid", 00:13:46.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.932 "strip_size_kb": 0, 00:13:46.932 "state": "configuring", 00:13:46.932 "raid_level": "raid1", 00:13:46.932 "superblock": false, 00:13:46.932 "num_base_bdevs": 4, 00:13:46.932 "num_base_bdevs_discovered": 2, 00:13:46.932 "num_base_bdevs_operational": 4, 00:13:46.932 "base_bdevs_list": [ 00:13:46.932 { 00:13:46.932 "name": "BaseBdev1", 00:13:46.932 "uuid": "aad774ba-6d2b-48ec-9d76-14e02fe49643", 00:13:46.932 "is_configured": true, 00:13:46.932 "data_offset": 0, 00:13:46.932 "data_size": 65536 00:13:46.932 }, 00:13:46.932 { 00:13:46.932 "name": null, 00:13:46.932 "uuid": "789d7f14-7dfa-4529-a12c-2ca2d9ac58c1", 00:13:46.932 "is_configured": false, 00:13:46.932 "data_offset": 0, 00:13:46.932 "data_size": 65536 00:13:46.932 }, 00:13:46.932 { 00:13:46.932 "name": null, 00:13:46.932 "uuid": "21362af0-0a02-44d8-b649-679bb42daa85", 00:13:46.932 "is_configured": false, 00:13:46.932 "data_offset": 0, 00:13:46.932 "data_size": 65536 00:13:46.932 }, 00:13:46.932 { 00:13:46.933 "name": "BaseBdev4", 00:13:46.933 "uuid": "6caa6546-4312-4d6a-8bbd-734ceccd15ec", 00:13:46.933 "is_configured": true, 00:13:46.933 "data_offset": 0, 00:13:46.933 "data_size": 65536 00:13:46.933 } 00:13:46.933 ] 00:13:46.933 }' 00:13:46.933 20:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.933 20:10:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.499 20:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.499 20:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:47.499 20:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.499 20:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.499 20:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.499 20:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:47.499 20:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:47.499 20:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.499 20:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.499 [2024-10-17 20:10:32.940889] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:47.499 20:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.499 20:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:47.499 20:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:47.499 20:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:47.499 20:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:47.499 20:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:47.499 20:10:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:47.499 20:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.499 20:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.499 20:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.499 20:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.499 20:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:47.499 20:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.499 20:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.499 20:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.499 20:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.499 20:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.499 "name": "Existed_Raid", 00:13:47.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.499 "strip_size_kb": 0, 00:13:47.499 "state": "configuring", 00:13:47.499 "raid_level": "raid1", 00:13:47.499 "superblock": false, 00:13:47.499 "num_base_bdevs": 4, 00:13:47.499 "num_base_bdevs_discovered": 3, 00:13:47.499 "num_base_bdevs_operational": 4, 00:13:47.499 "base_bdevs_list": [ 00:13:47.499 { 00:13:47.499 "name": "BaseBdev1", 00:13:47.499 "uuid": "aad774ba-6d2b-48ec-9d76-14e02fe49643", 00:13:47.499 "is_configured": true, 00:13:47.499 "data_offset": 0, 00:13:47.499 "data_size": 65536 00:13:47.499 }, 00:13:47.499 { 00:13:47.499 "name": null, 00:13:47.499 "uuid": "789d7f14-7dfa-4529-a12c-2ca2d9ac58c1", 00:13:47.499 "is_configured": false, 00:13:47.499 "data_offset": 
0, 00:13:47.499 "data_size": 65536 00:13:47.499 }, 00:13:47.499 { 00:13:47.499 "name": "BaseBdev3", 00:13:47.499 "uuid": "21362af0-0a02-44d8-b649-679bb42daa85", 00:13:47.499 "is_configured": true, 00:13:47.499 "data_offset": 0, 00:13:47.499 "data_size": 65536 00:13:47.499 }, 00:13:47.499 { 00:13:47.499 "name": "BaseBdev4", 00:13:47.499 "uuid": "6caa6546-4312-4d6a-8bbd-734ceccd15ec", 00:13:47.499 "is_configured": true, 00:13:47.499 "data_offset": 0, 00:13:47.499 "data_size": 65536 00:13:47.499 } 00:13:47.499 ] 00:13:47.499 }' 00:13:47.499 20:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.499 20:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.066 20:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.067 20:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:48.067 20:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.067 20:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.067 20:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.067 20:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:48.067 20:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:48.067 20:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.067 20:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.067 [2024-10-17 20:10:33.525087] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:48.067 20:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.067 20:10:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:48.067 20:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:48.067 20:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:48.067 20:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:48.067 20:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:48.067 20:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:48.067 20:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.067 20:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.067 20:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.067 20:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.067 20:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.067 20:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.067 20:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.067 20:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:48.067 20:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.067 20:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.067 "name": "Existed_Raid", 00:13:48.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.067 "strip_size_kb": 0, 00:13:48.067 "state": "configuring", 00:13:48.067 
"raid_level": "raid1", 00:13:48.067 "superblock": false, 00:13:48.067 "num_base_bdevs": 4, 00:13:48.067 "num_base_bdevs_discovered": 2, 00:13:48.067 "num_base_bdevs_operational": 4, 00:13:48.067 "base_bdevs_list": [ 00:13:48.067 { 00:13:48.067 "name": null, 00:13:48.067 "uuid": "aad774ba-6d2b-48ec-9d76-14e02fe49643", 00:13:48.067 "is_configured": false, 00:13:48.067 "data_offset": 0, 00:13:48.067 "data_size": 65536 00:13:48.067 }, 00:13:48.067 { 00:13:48.067 "name": null, 00:13:48.067 "uuid": "789d7f14-7dfa-4529-a12c-2ca2d9ac58c1", 00:13:48.067 "is_configured": false, 00:13:48.067 "data_offset": 0, 00:13:48.067 "data_size": 65536 00:13:48.067 }, 00:13:48.067 { 00:13:48.067 "name": "BaseBdev3", 00:13:48.067 "uuid": "21362af0-0a02-44d8-b649-679bb42daa85", 00:13:48.067 "is_configured": true, 00:13:48.067 "data_offset": 0, 00:13:48.067 "data_size": 65536 00:13:48.067 }, 00:13:48.067 { 00:13:48.067 "name": "BaseBdev4", 00:13:48.067 "uuid": "6caa6546-4312-4d6a-8bbd-734ceccd15ec", 00:13:48.067 "is_configured": true, 00:13:48.067 "data_offset": 0, 00:13:48.067 "data_size": 65536 00:13:48.067 } 00:13:48.067 ] 00:13:48.067 }' 00:13:48.067 20:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.067 20:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.651 20:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.651 20:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.651 20:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.651 20:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:48.651 20:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.651 20:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:13:48.651 20:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:48.651 20:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.651 20:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.651 [2024-10-17 20:10:34.162711] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:48.651 20:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.651 20:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:48.651 20:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:48.651 20:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:48.651 20:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:48.651 20:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:48.651 20:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:48.651 20:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.651 20:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.651 20:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.651 20:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.651 20:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.651 20:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:13:48.651 20:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.651 20:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.651 20:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.651 20:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.651 "name": "Existed_Raid", 00:13:48.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.651 "strip_size_kb": 0, 00:13:48.651 "state": "configuring", 00:13:48.651 "raid_level": "raid1", 00:13:48.651 "superblock": false, 00:13:48.651 "num_base_bdevs": 4, 00:13:48.651 "num_base_bdevs_discovered": 3, 00:13:48.651 "num_base_bdevs_operational": 4, 00:13:48.651 "base_bdevs_list": [ 00:13:48.651 { 00:13:48.651 "name": null, 00:13:48.651 "uuid": "aad774ba-6d2b-48ec-9d76-14e02fe49643", 00:13:48.651 "is_configured": false, 00:13:48.651 "data_offset": 0, 00:13:48.651 "data_size": 65536 00:13:48.651 }, 00:13:48.651 { 00:13:48.651 "name": "BaseBdev2", 00:13:48.651 "uuid": "789d7f14-7dfa-4529-a12c-2ca2d9ac58c1", 00:13:48.651 "is_configured": true, 00:13:48.651 "data_offset": 0, 00:13:48.651 "data_size": 65536 00:13:48.651 }, 00:13:48.651 { 00:13:48.651 "name": "BaseBdev3", 00:13:48.651 "uuid": "21362af0-0a02-44d8-b649-679bb42daa85", 00:13:48.651 "is_configured": true, 00:13:48.651 "data_offset": 0, 00:13:48.651 "data_size": 65536 00:13:48.651 }, 00:13:48.651 { 00:13:48.651 "name": "BaseBdev4", 00:13:48.651 "uuid": "6caa6546-4312-4d6a-8bbd-734ceccd15ec", 00:13:48.651 "is_configured": true, 00:13:48.651 "data_offset": 0, 00:13:48.651 "data_size": 65536 00:13:48.651 } 00:13:48.651 ] 00:13:48.651 }' 00:13:48.651 20:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.651 20:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.217 20:10:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:49.217 20:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.217 20:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.217 20:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.217 20:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.217 20:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:49.217 20:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.217 20:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.217 20:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:49.217 20:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.217 20:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.217 20:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u aad774ba-6d2b-48ec-9d76-14e02fe49643 00:13:49.217 20:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.217 20:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.217 [2024-10-17 20:10:34.776564] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:49.217 [2024-10-17 20:10:34.776626] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:49.217 [2024-10-17 20:10:34.776642] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:49.217 
[2024-10-17 20:10:34.776966] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:49.217 [2024-10-17 20:10:34.777209] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:49.217 [2024-10-17 20:10:34.777226] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:49.217 [2024-10-17 20:10:34.777522] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:49.217 NewBaseBdev 00:13:49.217 20:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.217 20:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:49.217 20:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:13:49.217 20:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:49.217 20:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:49.217 20:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:49.217 20:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:49.217 20:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:49.217 20:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.217 20:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.217 20:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.217 20:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:49.217 20:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:49.217 20:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.217 [ 00:13:49.217 { 00:13:49.217 "name": "NewBaseBdev", 00:13:49.217 "aliases": [ 00:13:49.217 "aad774ba-6d2b-48ec-9d76-14e02fe49643" 00:13:49.217 ], 00:13:49.217 "product_name": "Malloc disk", 00:13:49.217 "block_size": 512, 00:13:49.217 "num_blocks": 65536, 00:13:49.217 "uuid": "aad774ba-6d2b-48ec-9d76-14e02fe49643", 00:13:49.217 "assigned_rate_limits": { 00:13:49.217 "rw_ios_per_sec": 0, 00:13:49.217 "rw_mbytes_per_sec": 0, 00:13:49.217 "r_mbytes_per_sec": 0, 00:13:49.217 "w_mbytes_per_sec": 0 00:13:49.217 }, 00:13:49.217 "claimed": true, 00:13:49.217 "claim_type": "exclusive_write", 00:13:49.217 "zoned": false, 00:13:49.217 "supported_io_types": { 00:13:49.217 "read": true, 00:13:49.217 "write": true, 00:13:49.217 "unmap": true, 00:13:49.217 "flush": true, 00:13:49.217 "reset": true, 00:13:49.217 "nvme_admin": false, 00:13:49.217 "nvme_io": false, 00:13:49.217 "nvme_io_md": false, 00:13:49.217 "write_zeroes": true, 00:13:49.217 "zcopy": true, 00:13:49.217 "get_zone_info": false, 00:13:49.217 "zone_management": false, 00:13:49.217 "zone_append": false, 00:13:49.217 "compare": false, 00:13:49.217 "compare_and_write": false, 00:13:49.217 "abort": true, 00:13:49.217 "seek_hole": false, 00:13:49.217 "seek_data": false, 00:13:49.217 "copy": true, 00:13:49.217 "nvme_iov_md": false 00:13:49.217 }, 00:13:49.217 "memory_domains": [ 00:13:49.217 { 00:13:49.217 "dma_device_id": "system", 00:13:49.217 "dma_device_type": 1 00:13:49.217 }, 00:13:49.217 { 00:13:49.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.217 "dma_device_type": 2 00:13:49.217 } 00:13:49.217 ], 00:13:49.217 "driver_specific": {} 00:13:49.217 } 00:13:49.217 ] 00:13:49.217 20:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.217 20:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 
00:13:49.217 20:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:13:49.217 20:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:49.217 20:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:49.217 20:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:49.217 20:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:49.217 20:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:49.217 20:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:49.217 20:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:49.217 20:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:49.217 20:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:49.217 20:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.217 20:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.217 20:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:49.217 20:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.217 20:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.217 20:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:49.217 "name": "Existed_Raid", 00:13:49.217 "uuid": "3999ccf8-2b0c-4a10-bfb4-af3e60a9744d", 00:13:49.217 "strip_size_kb": 0, 00:13:49.217 "state": "online", 00:13:49.217 
"raid_level": "raid1", 00:13:49.217 "superblock": false, 00:13:49.217 "num_base_bdevs": 4, 00:13:49.217 "num_base_bdevs_discovered": 4, 00:13:49.217 "num_base_bdevs_operational": 4, 00:13:49.217 "base_bdevs_list": [ 00:13:49.217 { 00:13:49.218 "name": "NewBaseBdev", 00:13:49.218 "uuid": "aad774ba-6d2b-48ec-9d76-14e02fe49643", 00:13:49.218 "is_configured": true, 00:13:49.218 "data_offset": 0, 00:13:49.218 "data_size": 65536 00:13:49.218 }, 00:13:49.218 { 00:13:49.218 "name": "BaseBdev2", 00:13:49.218 "uuid": "789d7f14-7dfa-4529-a12c-2ca2d9ac58c1", 00:13:49.218 "is_configured": true, 00:13:49.218 "data_offset": 0, 00:13:49.218 "data_size": 65536 00:13:49.218 }, 00:13:49.218 { 00:13:49.218 "name": "BaseBdev3", 00:13:49.218 "uuid": "21362af0-0a02-44d8-b649-679bb42daa85", 00:13:49.218 "is_configured": true, 00:13:49.218 "data_offset": 0, 00:13:49.218 "data_size": 65536 00:13:49.218 }, 00:13:49.218 { 00:13:49.218 "name": "BaseBdev4", 00:13:49.218 "uuid": "6caa6546-4312-4d6a-8bbd-734ceccd15ec", 00:13:49.218 "is_configured": true, 00:13:49.218 "data_offset": 0, 00:13:49.218 "data_size": 65536 00:13:49.218 } 00:13:49.218 ] 00:13:49.218 }' 00:13:49.218 20:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.218 20:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.783 20:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:49.783 20:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:49.783 20:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:49.783 20:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:49.783 20:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:49.783 20:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:13:49.783 20:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:49.783 20:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.783 20:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.783 20:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:49.783 [2024-10-17 20:10:35.297170] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:49.783 20:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.783 20:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:49.784 "name": "Existed_Raid", 00:13:49.784 "aliases": [ 00:13:49.784 "3999ccf8-2b0c-4a10-bfb4-af3e60a9744d" 00:13:49.784 ], 00:13:49.784 "product_name": "Raid Volume", 00:13:49.784 "block_size": 512, 00:13:49.784 "num_blocks": 65536, 00:13:49.784 "uuid": "3999ccf8-2b0c-4a10-bfb4-af3e60a9744d", 00:13:49.784 "assigned_rate_limits": { 00:13:49.784 "rw_ios_per_sec": 0, 00:13:49.784 "rw_mbytes_per_sec": 0, 00:13:49.784 "r_mbytes_per_sec": 0, 00:13:49.784 "w_mbytes_per_sec": 0 00:13:49.784 }, 00:13:49.784 "claimed": false, 00:13:49.784 "zoned": false, 00:13:49.784 "supported_io_types": { 00:13:49.784 "read": true, 00:13:49.784 "write": true, 00:13:49.784 "unmap": false, 00:13:49.784 "flush": false, 00:13:49.784 "reset": true, 00:13:49.784 "nvme_admin": false, 00:13:49.784 "nvme_io": false, 00:13:49.784 "nvme_io_md": false, 00:13:49.784 "write_zeroes": true, 00:13:49.784 "zcopy": false, 00:13:49.784 "get_zone_info": false, 00:13:49.784 "zone_management": false, 00:13:49.784 "zone_append": false, 00:13:49.784 "compare": false, 00:13:49.784 "compare_and_write": false, 00:13:49.784 "abort": false, 00:13:49.784 "seek_hole": false, 00:13:49.784 "seek_data": false, 00:13:49.784 
"copy": false, 00:13:49.784 "nvme_iov_md": false 00:13:49.784 }, 00:13:49.784 "memory_domains": [ 00:13:49.784 { 00:13:49.784 "dma_device_id": "system", 00:13:49.784 "dma_device_type": 1 00:13:49.784 }, 00:13:49.784 { 00:13:49.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.784 "dma_device_type": 2 00:13:49.784 }, 00:13:49.784 { 00:13:49.784 "dma_device_id": "system", 00:13:49.784 "dma_device_type": 1 00:13:49.784 }, 00:13:49.784 { 00:13:49.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.784 "dma_device_type": 2 00:13:49.784 }, 00:13:49.784 { 00:13:49.784 "dma_device_id": "system", 00:13:49.784 "dma_device_type": 1 00:13:49.784 }, 00:13:49.784 { 00:13:49.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.784 "dma_device_type": 2 00:13:49.784 }, 00:13:49.784 { 00:13:49.784 "dma_device_id": "system", 00:13:49.784 "dma_device_type": 1 00:13:49.784 }, 00:13:49.784 { 00:13:49.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.784 "dma_device_type": 2 00:13:49.784 } 00:13:49.784 ], 00:13:49.784 "driver_specific": { 00:13:49.784 "raid": { 00:13:49.784 "uuid": "3999ccf8-2b0c-4a10-bfb4-af3e60a9744d", 00:13:49.784 "strip_size_kb": 0, 00:13:49.784 "state": "online", 00:13:49.784 "raid_level": "raid1", 00:13:49.784 "superblock": false, 00:13:49.784 "num_base_bdevs": 4, 00:13:49.784 "num_base_bdevs_discovered": 4, 00:13:49.784 "num_base_bdevs_operational": 4, 00:13:49.784 "base_bdevs_list": [ 00:13:49.784 { 00:13:49.784 "name": "NewBaseBdev", 00:13:49.784 "uuid": "aad774ba-6d2b-48ec-9d76-14e02fe49643", 00:13:49.784 "is_configured": true, 00:13:49.784 "data_offset": 0, 00:13:49.784 "data_size": 65536 00:13:49.784 }, 00:13:49.784 { 00:13:49.784 "name": "BaseBdev2", 00:13:49.784 "uuid": "789d7f14-7dfa-4529-a12c-2ca2d9ac58c1", 00:13:49.784 "is_configured": true, 00:13:49.784 "data_offset": 0, 00:13:49.784 "data_size": 65536 00:13:49.784 }, 00:13:49.784 { 00:13:49.784 "name": "BaseBdev3", 00:13:49.784 "uuid": "21362af0-0a02-44d8-b649-679bb42daa85", 00:13:49.784 
"is_configured": true, 00:13:49.784 "data_offset": 0, 00:13:49.784 "data_size": 65536 00:13:49.784 }, 00:13:49.784 { 00:13:49.784 "name": "BaseBdev4", 00:13:49.784 "uuid": "6caa6546-4312-4d6a-8bbd-734ceccd15ec", 00:13:49.784 "is_configured": true, 00:13:49.784 "data_offset": 0, 00:13:49.784 "data_size": 65536 00:13:49.784 } 00:13:49.784 ] 00:13:49.784 } 00:13:49.784 } 00:13:49.784 }' 00:13:49.784 20:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:49.784 20:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:49.784 BaseBdev2 00:13:49.784 BaseBdev3 00:13:49.784 BaseBdev4' 00:13:49.784 20:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:50.042 20:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:50.042 20:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:50.042 20:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:50.042 20:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.042 20:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.042 20:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:50.042 20:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.042 20:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:50.042 20:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:50.043 20:10:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:50.043 20:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:50.043 20:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:50.043 20:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.043 20:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.043 20:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.043 20:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:50.043 20:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:50.043 20:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:50.043 20:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:50.043 20:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:50.043 20:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.043 20:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.043 20:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.043 20:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:50.043 20:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:50.043 20:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:50.043 20:10:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:50.043 20:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:50.043 20:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.043 20:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.043 20:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.043 20:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:50.043 20:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:50.043 20:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:50.043 20:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.043 20:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.043 [2024-10-17 20:10:35.660857] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:50.043 [2024-10-17 20:10:35.660899] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:50.043 [2024-10-17 20:10:35.661040] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:50.043 [2024-10-17 20:10:35.661406] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:50.043 [2024-10-17 20:10:35.661437] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:13:50.043 20:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.043 20:10:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 73230 00:13:50.043 20:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 73230 ']' 00:13:50.043 20:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 73230 00:13:50.043 20:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:13:50.043 20:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:50.043 20:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73230 00:13:50.043 20:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:50.043 killing process with pid 73230 00:13:50.043 20:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:50.043 20:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73230' 00:13:50.302 20:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 73230 00:13:50.302 [2024-10-17 20:10:35.694445] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:50.302 20:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 73230 00:13:50.560 [2024-10-17 20:10:36.049869] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:51.496 20:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:51.496 00:13:51.496 real 0m12.728s 00:13:51.496 user 0m21.125s 00:13:51.496 sys 0m1.813s 00:13:51.496 20:10:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:51.496 20:10:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.496 ************************************ 00:13:51.496 END TEST raid_state_function_test 00:13:51.496 ************************************ 
00:13:51.496 20:10:37 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:13:51.496 20:10:37 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:51.496 20:10:37 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:51.496 20:10:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:51.496 ************************************ 00:13:51.496 START TEST raid_state_function_test_sb 00:13:51.496 ************************************ 00:13:51.496 20:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 true 00:13:51.496 20:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:13:51.496 20:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:51.496 20:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:51.496 20:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:51.496 20:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:51.496 20:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:51.496 20:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:51.496 20:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:51.496 20:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:51.496 20:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:51.496 20:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:51.496 20:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:51.496 
20:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:51.496 20:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:51.496 20:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:51.496 20:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:51.496 20:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:51.496 20:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:51.496 20:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:51.496 20:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:51.496 20:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:51.496 20:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:51.496 20:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:51.496 20:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:51.496 20:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:13:51.496 20:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:13:51.496 20:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:51.496 20:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:51.496 20:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73907 00:13:51.496 20:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:51.496 Process raid pid: 73907 00:13:51.496 20:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73907' 00:13:51.496 20:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73907 00:13:51.496 20:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 73907 ']' 00:13:51.496 20:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:51.496 20:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:51.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:51.496 20:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:51.496 20:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:51.496 20:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.756 [2024-10-17 20:10:37.258835] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 
00:13:51.756 [2024-10-17 20:10:37.259021] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:52.015 [2024-10-17 20:10:37.432167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.015 [2024-10-17 20:10:37.562075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.274 [2024-10-17 20:10:37.770709] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:52.274 [2024-10-17 20:10:37.770765] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:52.845 20:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:52.845 20:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:13:52.845 20:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:52.845 20:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.845 20:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.845 [2024-10-17 20:10:38.194150] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:52.845 [2024-10-17 20:10:38.194210] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:52.845 [2024-10-17 20:10:38.194226] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:52.845 [2024-10-17 20:10:38.194242] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:52.845 [2024-10-17 20:10:38.194253] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:13:52.845 [2024-10-17 20:10:38.194267] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:52.845 [2024-10-17 20:10:38.194277] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:52.845 [2024-10-17 20:10:38.194291] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:52.845 20:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.845 20:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:52.845 20:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:52.845 20:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:52.845 20:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:52.845 20:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:52.845 20:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:52.845 20:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.845 20:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.845 20:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.845 20:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.845 20:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.845 20:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.845 20:10:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.845 20:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:52.845 20:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.845 20:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.845 "name": "Existed_Raid", 00:13:52.845 "uuid": "810ccbbc-1ad7-44b6-b016-14e4985cacf4", 00:13:52.845 "strip_size_kb": 0, 00:13:52.845 "state": "configuring", 00:13:52.845 "raid_level": "raid1", 00:13:52.845 "superblock": true, 00:13:52.845 "num_base_bdevs": 4, 00:13:52.845 "num_base_bdevs_discovered": 0, 00:13:52.845 "num_base_bdevs_operational": 4, 00:13:52.845 "base_bdevs_list": [ 00:13:52.845 { 00:13:52.845 "name": "BaseBdev1", 00:13:52.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.845 "is_configured": false, 00:13:52.845 "data_offset": 0, 00:13:52.845 "data_size": 0 00:13:52.845 }, 00:13:52.845 { 00:13:52.845 "name": "BaseBdev2", 00:13:52.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.845 "is_configured": false, 00:13:52.845 "data_offset": 0, 00:13:52.845 "data_size": 0 00:13:52.845 }, 00:13:52.845 { 00:13:52.845 "name": "BaseBdev3", 00:13:52.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.845 "is_configured": false, 00:13:52.845 "data_offset": 0, 00:13:52.845 "data_size": 0 00:13:52.845 }, 00:13:52.845 { 00:13:52.845 "name": "BaseBdev4", 00:13:52.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.845 "is_configured": false, 00:13:52.845 "data_offset": 0, 00:13:52.845 "data_size": 0 00:13:52.845 } 00:13:52.845 ] 00:13:52.845 }' 00:13:52.845 20:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.845 20:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.104 20:10:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:53.104 20:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.104 20:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.104 [2024-10-17 20:10:38.742205] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:53.104 [2024-10-17 20:10:38.742257] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:53.104 20:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.104 20:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:53.104 20:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.104 20:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.104 [2024-10-17 20:10:38.750225] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:53.104 [2024-10-17 20:10:38.750274] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:53.104 [2024-10-17 20:10:38.750288] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:53.104 [2024-10-17 20:10:38.750304] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:53.104 [2024-10-17 20:10:38.750314] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:53.104 [2024-10-17 20:10:38.750329] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:53.104 [2024-10-17 20:10:38.750338] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev4 00:13:53.104 [2024-10-17 20:10:38.750352] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:53.104 20:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.104 20:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:53.104 20:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.104 20:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.364 [2024-10-17 20:10:38.796008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:53.364 BaseBdev1 00:13:53.364 20:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.364 20:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:53.364 20:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:53.364 20:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:53.364 20:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:53.364 20:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:53.364 20:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:53.364 20:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:53.364 20:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.364 20:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.364 20:10:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.364 20:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:53.364 20:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.364 20:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.364 [ 00:13:53.364 { 00:13:53.364 "name": "BaseBdev1", 00:13:53.364 "aliases": [ 00:13:53.364 "d8814c8c-3cce-47c1-a725-4a16166ecced" 00:13:53.364 ], 00:13:53.364 "product_name": "Malloc disk", 00:13:53.364 "block_size": 512, 00:13:53.364 "num_blocks": 65536, 00:13:53.364 "uuid": "d8814c8c-3cce-47c1-a725-4a16166ecced", 00:13:53.364 "assigned_rate_limits": { 00:13:53.364 "rw_ios_per_sec": 0, 00:13:53.364 "rw_mbytes_per_sec": 0, 00:13:53.364 "r_mbytes_per_sec": 0, 00:13:53.364 "w_mbytes_per_sec": 0 00:13:53.364 }, 00:13:53.364 "claimed": true, 00:13:53.364 "claim_type": "exclusive_write", 00:13:53.364 "zoned": false, 00:13:53.364 "supported_io_types": { 00:13:53.364 "read": true, 00:13:53.364 "write": true, 00:13:53.364 "unmap": true, 00:13:53.364 "flush": true, 00:13:53.364 "reset": true, 00:13:53.364 "nvme_admin": false, 00:13:53.364 "nvme_io": false, 00:13:53.364 "nvme_io_md": false, 00:13:53.364 "write_zeroes": true, 00:13:53.364 "zcopy": true, 00:13:53.364 "get_zone_info": false, 00:13:53.364 "zone_management": false, 00:13:53.364 "zone_append": false, 00:13:53.364 "compare": false, 00:13:53.364 "compare_and_write": false, 00:13:53.364 "abort": true, 00:13:53.364 "seek_hole": false, 00:13:53.364 "seek_data": false, 00:13:53.364 "copy": true, 00:13:53.364 "nvme_iov_md": false 00:13:53.364 }, 00:13:53.364 "memory_domains": [ 00:13:53.364 { 00:13:53.364 "dma_device_id": "system", 00:13:53.364 "dma_device_type": 1 00:13:53.364 }, 00:13:53.364 { 00:13:53.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:53.364 "dma_device_type": 2 00:13:53.364 } 00:13:53.364 
], 00:13:53.364 "driver_specific": {} 00:13:53.364 } 00:13:53.364 ] 00:13:53.364 20:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.364 20:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:53.364 20:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:53.364 20:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:53.364 20:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:53.364 20:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:53.364 20:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:53.364 20:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:53.364 20:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:53.364 20:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:53.364 20:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:53.364 20:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:53.364 20:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.364 20:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.364 20:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:53.364 20:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.364 20:10:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.364 20:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:53.364 "name": "Existed_Raid", 00:13:53.364 "uuid": "971e1aef-3f80-46bc-8bd8-cac4190f78e6", 00:13:53.364 "strip_size_kb": 0, 00:13:53.364 "state": "configuring", 00:13:53.364 "raid_level": "raid1", 00:13:53.364 "superblock": true, 00:13:53.364 "num_base_bdevs": 4, 00:13:53.364 "num_base_bdevs_discovered": 1, 00:13:53.364 "num_base_bdevs_operational": 4, 00:13:53.364 "base_bdevs_list": [ 00:13:53.364 { 00:13:53.364 "name": "BaseBdev1", 00:13:53.364 "uuid": "d8814c8c-3cce-47c1-a725-4a16166ecced", 00:13:53.364 "is_configured": true, 00:13:53.364 "data_offset": 2048, 00:13:53.364 "data_size": 63488 00:13:53.364 }, 00:13:53.364 { 00:13:53.364 "name": "BaseBdev2", 00:13:53.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.364 "is_configured": false, 00:13:53.364 "data_offset": 0, 00:13:53.364 "data_size": 0 00:13:53.364 }, 00:13:53.364 { 00:13:53.364 "name": "BaseBdev3", 00:13:53.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.364 "is_configured": false, 00:13:53.364 "data_offset": 0, 00:13:53.365 "data_size": 0 00:13:53.365 }, 00:13:53.365 { 00:13:53.365 "name": "BaseBdev4", 00:13:53.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.365 "is_configured": false, 00:13:53.365 "data_offset": 0, 00:13:53.365 "data_size": 0 00:13:53.365 } 00:13:53.365 ] 00:13:53.365 }' 00:13:53.365 20:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:53.365 20:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.932 20:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:53.932 20:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.932 20:10:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.932 [2024-10-17 20:10:39.344189] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:53.932 [2024-10-17 20:10:39.344260] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:53.932 20:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.932 20:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:53.932 20:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.932 20:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.932 [2024-10-17 20:10:39.352268] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:53.932 [2024-10-17 20:10:39.354733] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:53.932 [2024-10-17 20:10:39.354782] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:53.932 [2024-10-17 20:10:39.354797] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:53.932 [2024-10-17 20:10:39.354815] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:53.932 [2024-10-17 20:10:39.354826] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:53.932 [2024-10-17 20:10:39.354840] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:53.932 20:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.933 20:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 
1 )) 00:13:53.933 20:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:53.933 20:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:53.933 20:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:53.933 20:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:53.933 20:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:53.933 20:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:53.933 20:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:53.933 20:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:53.933 20:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:53.933 20:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:53.933 20:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:53.933 20:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.933 20:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.933 20:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.933 20:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:53.933 20:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.933 20:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:13:53.933 "name": "Existed_Raid", 00:13:53.933 "uuid": "e1c43a29-63a9-41ba-9480-bd33c88b94dd", 00:13:53.933 "strip_size_kb": 0, 00:13:53.933 "state": "configuring", 00:13:53.933 "raid_level": "raid1", 00:13:53.933 "superblock": true, 00:13:53.933 "num_base_bdevs": 4, 00:13:53.933 "num_base_bdevs_discovered": 1, 00:13:53.933 "num_base_bdevs_operational": 4, 00:13:53.933 "base_bdevs_list": [ 00:13:53.933 { 00:13:53.933 "name": "BaseBdev1", 00:13:53.933 "uuid": "d8814c8c-3cce-47c1-a725-4a16166ecced", 00:13:53.933 "is_configured": true, 00:13:53.933 "data_offset": 2048, 00:13:53.933 "data_size": 63488 00:13:53.933 }, 00:13:53.933 { 00:13:53.933 "name": "BaseBdev2", 00:13:53.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.933 "is_configured": false, 00:13:53.933 "data_offset": 0, 00:13:53.933 "data_size": 0 00:13:53.933 }, 00:13:53.933 { 00:13:53.933 "name": "BaseBdev3", 00:13:53.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.933 "is_configured": false, 00:13:53.933 "data_offset": 0, 00:13:53.933 "data_size": 0 00:13:53.933 }, 00:13:53.933 { 00:13:53.933 "name": "BaseBdev4", 00:13:53.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.933 "is_configured": false, 00:13:53.933 "data_offset": 0, 00:13:53.933 "data_size": 0 00:13:53.933 } 00:13:53.933 ] 00:13:53.933 }' 00:13:53.933 20:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:53.933 20:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.500 20:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:54.500 20:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.500 20:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.500 [2024-10-17 20:10:39.882592] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:13:54.500 BaseBdev2 00:13:54.500 20:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.500 20:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:54.500 20:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:54.500 20:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:54.500 20:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:54.500 20:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:54.500 20:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:54.500 20:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:54.500 20:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.500 20:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.500 20:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.500 20:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:54.500 20:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.500 20:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.500 [ 00:13:54.500 { 00:13:54.500 "name": "BaseBdev2", 00:13:54.500 "aliases": [ 00:13:54.500 "32cd3989-b0c3-49e0-841d-da6038df8af6" 00:13:54.500 ], 00:13:54.500 "product_name": "Malloc disk", 00:13:54.500 "block_size": 512, 00:13:54.500 "num_blocks": 65536, 00:13:54.500 "uuid": "32cd3989-b0c3-49e0-841d-da6038df8af6", 00:13:54.500 
"assigned_rate_limits": { 00:13:54.500 "rw_ios_per_sec": 0, 00:13:54.500 "rw_mbytes_per_sec": 0, 00:13:54.500 "r_mbytes_per_sec": 0, 00:13:54.500 "w_mbytes_per_sec": 0 00:13:54.500 }, 00:13:54.500 "claimed": true, 00:13:54.500 "claim_type": "exclusive_write", 00:13:54.500 "zoned": false, 00:13:54.500 "supported_io_types": { 00:13:54.500 "read": true, 00:13:54.500 "write": true, 00:13:54.500 "unmap": true, 00:13:54.500 "flush": true, 00:13:54.500 "reset": true, 00:13:54.500 "nvme_admin": false, 00:13:54.500 "nvme_io": false, 00:13:54.500 "nvme_io_md": false, 00:13:54.500 "write_zeroes": true, 00:13:54.500 "zcopy": true, 00:13:54.500 "get_zone_info": false, 00:13:54.500 "zone_management": false, 00:13:54.500 "zone_append": false, 00:13:54.500 "compare": false, 00:13:54.500 "compare_and_write": false, 00:13:54.500 "abort": true, 00:13:54.500 "seek_hole": false, 00:13:54.500 "seek_data": false, 00:13:54.500 "copy": true, 00:13:54.500 "nvme_iov_md": false 00:13:54.500 }, 00:13:54.500 "memory_domains": [ 00:13:54.500 { 00:13:54.500 "dma_device_id": "system", 00:13:54.500 "dma_device_type": 1 00:13:54.500 }, 00:13:54.500 { 00:13:54.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:54.500 "dma_device_type": 2 00:13:54.500 } 00:13:54.500 ], 00:13:54.500 "driver_specific": {} 00:13:54.500 } 00:13:54.500 ] 00:13:54.500 20:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.500 20:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:54.500 20:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:54.500 20:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:54.500 20:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:54.500 20:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:13:54.500 20:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:54.500 20:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:54.500 20:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:54.500 20:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:54.500 20:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.500 20:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:54.500 20:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:54.500 20:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.500 20:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.500 20:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:54.500 20:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.500 20:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.500 20:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.500 20:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.500 "name": "Existed_Raid", 00:13:54.501 "uuid": "e1c43a29-63a9-41ba-9480-bd33c88b94dd", 00:13:54.501 "strip_size_kb": 0, 00:13:54.501 "state": "configuring", 00:13:54.501 "raid_level": "raid1", 00:13:54.501 "superblock": true, 00:13:54.501 "num_base_bdevs": 4, 00:13:54.501 "num_base_bdevs_discovered": 2, 00:13:54.501 "num_base_bdevs_operational": 4, 
00:13:54.501 "base_bdevs_list": [ 00:13:54.501 { 00:13:54.501 "name": "BaseBdev1", 00:13:54.501 "uuid": "d8814c8c-3cce-47c1-a725-4a16166ecced", 00:13:54.501 "is_configured": true, 00:13:54.501 "data_offset": 2048, 00:13:54.501 "data_size": 63488 00:13:54.501 }, 00:13:54.501 { 00:13:54.501 "name": "BaseBdev2", 00:13:54.501 "uuid": "32cd3989-b0c3-49e0-841d-da6038df8af6", 00:13:54.501 "is_configured": true, 00:13:54.501 "data_offset": 2048, 00:13:54.501 "data_size": 63488 00:13:54.501 }, 00:13:54.501 { 00:13:54.501 "name": "BaseBdev3", 00:13:54.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.501 "is_configured": false, 00:13:54.501 "data_offset": 0, 00:13:54.501 "data_size": 0 00:13:54.501 }, 00:13:54.501 { 00:13:54.501 "name": "BaseBdev4", 00:13:54.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.501 "is_configured": false, 00:13:54.501 "data_offset": 0, 00:13:54.501 "data_size": 0 00:13:54.501 } 00:13:54.501 ] 00:13:54.501 }' 00:13:54.501 20:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.501 20:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.067 20:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:55.067 20:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.067 20:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.067 [2024-10-17 20:10:40.474203] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:55.067 BaseBdev3 00:13:55.067 20:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.067 20:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:55.067 20:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # 
local bdev_name=BaseBdev3 00:13:55.067 20:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:55.067 20:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:55.067 20:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:55.068 20:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:55.068 20:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:55.068 20:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.068 20:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.068 20:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.068 20:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:55.068 20:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.068 20:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.068 [ 00:13:55.068 { 00:13:55.068 "name": "BaseBdev3", 00:13:55.068 "aliases": [ 00:13:55.068 "8fa7f64e-6a15-404e-83ca-d5963f866773" 00:13:55.068 ], 00:13:55.068 "product_name": "Malloc disk", 00:13:55.068 "block_size": 512, 00:13:55.068 "num_blocks": 65536, 00:13:55.068 "uuid": "8fa7f64e-6a15-404e-83ca-d5963f866773", 00:13:55.068 "assigned_rate_limits": { 00:13:55.068 "rw_ios_per_sec": 0, 00:13:55.068 "rw_mbytes_per_sec": 0, 00:13:55.068 "r_mbytes_per_sec": 0, 00:13:55.068 "w_mbytes_per_sec": 0 00:13:55.068 }, 00:13:55.068 "claimed": true, 00:13:55.068 "claim_type": "exclusive_write", 00:13:55.068 "zoned": false, 00:13:55.068 "supported_io_types": { 00:13:55.068 "read": true, 00:13:55.068 
"write": true, 00:13:55.068 "unmap": true, 00:13:55.068 "flush": true, 00:13:55.068 "reset": true, 00:13:55.068 "nvme_admin": false, 00:13:55.068 "nvme_io": false, 00:13:55.068 "nvme_io_md": false, 00:13:55.068 "write_zeroes": true, 00:13:55.068 "zcopy": true, 00:13:55.068 "get_zone_info": false, 00:13:55.068 "zone_management": false, 00:13:55.068 "zone_append": false, 00:13:55.068 "compare": false, 00:13:55.068 "compare_and_write": false, 00:13:55.068 "abort": true, 00:13:55.068 "seek_hole": false, 00:13:55.068 "seek_data": false, 00:13:55.068 "copy": true, 00:13:55.068 "nvme_iov_md": false 00:13:55.068 }, 00:13:55.068 "memory_domains": [ 00:13:55.068 { 00:13:55.068 "dma_device_id": "system", 00:13:55.068 "dma_device_type": 1 00:13:55.068 }, 00:13:55.068 { 00:13:55.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:55.068 "dma_device_type": 2 00:13:55.068 } 00:13:55.068 ], 00:13:55.068 "driver_specific": {} 00:13:55.068 } 00:13:55.068 ] 00:13:55.068 20:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.068 20:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:55.068 20:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:55.068 20:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:55.068 20:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:55.068 20:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:55.068 20:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:55.068 20:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:55.068 20:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:13:55.068 20:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:55.068 20:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.068 20:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.068 20:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.068 20:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.068 20:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.068 20:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.068 20:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:55.068 20:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.068 20:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.068 20:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.068 "name": "Existed_Raid", 00:13:55.068 "uuid": "e1c43a29-63a9-41ba-9480-bd33c88b94dd", 00:13:55.068 "strip_size_kb": 0, 00:13:55.068 "state": "configuring", 00:13:55.068 "raid_level": "raid1", 00:13:55.068 "superblock": true, 00:13:55.068 "num_base_bdevs": 4, 00:13:55.068 "num_base_bdevs_discovered": 3, 00:13:55.068 "num_base_bdevs_operational": 4, 00:13:55.068 "base_bdevs_list": [ 00:13:55.068 { 00:13:55.068 "name": "BaseBdev1", 00:13:55.068 "uuid": "d8814c8c-3cce-47c1-a725-4a16166ecced", 00:13:55.068 "is_configured": true, 00:13:55.068 "data_offset": 2048, 00:13:55.068 "data_size": 63488 00:13:55.068 }, 00:13:55.068 { 00:13:55.068 "name": "BaseBdev2", 00:13:55.068 "uuid": 
"32cd3989-b0c3-49e0-841d-da6038df8af6", 00:13:55.068 "is_configured": true, 00:13:55.068 "data_offset": 2048, 00:13:55.068 "data_size": 63488 00:13:55.068 }, 00:13:55.068 { 00:13:55.068 "name": "BaseBdev3", 00:13:55.068 "uuid": "8fa7f64e-6a15-404e-83ca-d5963f866773", 00:13:55.068 "is_configured": true, 00:13:55.068 "data_offset": 2048, 00:13:55.068 "data_size": 63488 00:13:55.068 }, 00:13:55.068 { 00:13:55.068 "name": "BaseBdev4", 00:13:55.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.068 "is_configured": false, 00:13:55.068 "data_offset": 0, 00:13:55.068 "data_size": 0 00:13:55.068 } 00:13:55.068 ] 00:13:55.068 }' 00:13:55.068 20:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.068 20:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.636 20:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:55.636 20:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.636 20:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.636 [2024-10-17 20:10:41.044817] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:55.636 [2024-10-17 20:10:41.045218] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:55.636 [2024-10-17 20:10:41.045275] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:55.636 BaseBdev4 00:13:55.636 [2024-10-17 20:10:41.045661] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:55.636 [2024-10-17 20:10:41.045937] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:55.636 [2024-10-17 20:10:41.045966] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:13:55.636 [2024-10-17 20:10:41.046172] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:55.636 20:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.636 20:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:55.636 20:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:13:55.636 20:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:55.636 20:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:55.636 20:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:55.636 20:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:55.636 20:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:55.636 20:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.636 20:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.636 20:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.636 20:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:55.636 20:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.636 20:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.636 [ 00:13:55.636 { 00:13:55.636 "name": "BaseBdev4", 00:13:55.636 "aliases": [ 00:13:55.636 "ad01abae-7127-44c9-aaae-c62235e2abb2" 00:13:55.636 ], 00:13:55.636 "product_name": "Malloc disk", 00:13:55.636 "block_size": 512, 00:13:55.636 
"num_blocks": 65536, 00:13:55.636 "uuid": "ad01abae-7127-44c9-aaae-c62235e2abb2", 00:13:55.636 "assigned_rate_limits": { 00:13:55.636 "rw_ios_per_sec": 0, 00:13:55.636 "rw_mbytes_per_sec": 0, 00:13:55.636 "r_mbytes_per_sec": 0, 00:13:55.636 "w_mbytes_per_sec": 0 00:13:55.636 }, 00:13:55.636 "claimed": true, 00:13:55.636 "claim_type": "exclusive_write", 00:13:55.636 "zoned": false, 00:13:55.636 "supported_io_types": { 00:13:55.636 "read": true, 00:13:55.636 "write": true, 00:13:55.636 "unmap": true, 00:13:55.636 "flush": true, 00:13:55.636 "reset": true, 00:13:55.636 "nvme_admin": false, 00:13:55.636 "nvme_io": false, 00:13:55.636 "nvme_io_md": false, 00:13:55.636 "write_zeroes": true, 00:13:55.636 "zcopy": true, 00:13:55.636 "get_zone_info": false, 00:13:55.636 "zone_management": false, 00:13:55.636 "zone_append": false, 00:13:55.636 "compare": false, 00:13:55.636 "compare_and_write": false, 00:13:55.636 "abort": true, 00:13:55.636 "seek_hole": false, 00:13:55.636 "seek_data": false, 00:13:55.636 "copy": true, 00:13:55.636 "nvme_iov_md": false 00:13:55.636 }, 00:13:55.636 "memory_domains": [ 00:13:55.636 { 00:13:55.636 "dma_device_id": "system", 00:13:55.636 "dma_device_type": 1 00:13:55.636 }, 00:13:55.636 { 00:13:55.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:55.636 "dma_device_type": 2 00:13:55.636 } 00:13:55.636 ], 00:13:55.636 "driver_specific": {} 00:13:55.636 } 00:13:55.636 ] 00:13:55.636 20:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.636 20:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:55.636 20:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:55.636 20:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:55.636 20:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:13:55.636 20:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:55.636 20:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:55.636 20:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:55.636 20:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:55.636 20:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:55.636 20:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.636 20:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.636 20:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.636 20:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.636 20:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.636 20:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:55.636 20:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.636 20:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.636 20:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.636 20:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.636 "name": "Existed_Raid", 00:13:55.636 "uuid": "e1c43a29-63a9-41ba-9480-bd33c88b94dd", 00:13:55.636 "strip_size_kb": 0, 00:13:55.636 "state": "online", 00:13:55.636 "raid_level": "raid1", 00:13:55.636 "superblock": true, 00:13:55.636 "num_base_bdevs": 4, 
00:13:55.636 "num_base_bdevs_discovered": 4, 00:13:55.636 "num_base_bdevs_operational": 4, 00:13:55.636 "base_bdevs_list": [ 00:13:55.636 { 00:13:55.636 "name": "BaseBdev1", 00:13:55.636 "uuid": "d8814c8c-3cce-47c1-a725-4a16166ecced", 00:13:55.636 "is_configured": true, 00:13:55.636 "data_offset": 2048, 00:13:55.636 "data_size": 63488 00:13:55.636 }, 00:13:55.636 { 00:13:55.636 "name": "BaseBdev2", 00:13:55.636 "uuid": "32cd3989-b0c3-49e0-841d-da6038df8af6", 00:13:55.636 "is_configured": true, 00:13:55.636 "data_offset": 2048, 00:13:55.636 "data_size": 63488 00:13:55.636 }, 00:13:55.636 { 00:13:55.636 "name": "BaseBdev3", 00:13:55.636 "uuid": "8fa7f64e-6a15-404e-83ca-d5963f866773", 00:13:55.636 "is_configured": true, 00:13:55.636 "data_offset": 2048, 00:13:55.636 "data_size": 63488 00:13:55.636 }, 00:13:55.636 { 00:13:55.636 "name": "BaseBdev4", 00:13:55.636 "uuid": "ad01abae-7127-44c9-aaae-c62235e2abb2", 00:13:55.636 "is_configured": true, 00:13:55.636 "data_offset": 2048, 00:13:55.636 "data_size": 63488 00:13:55.636 } 00:13:55.636 ] 00:13:55.636 }' 00:13:55.636 20:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.636 20:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.203 20:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:56.203 20:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:56.203 20:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:56.203 20:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:56.203 20:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:56.203 20:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:56.203 
20:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:56.203 20:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:56.203 20:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.203 20:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.203 [2024-10-17 20:10:41.601449] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:56.203 20:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.203 20:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:56.203 "name": "Existed_Raid", 00:13:56.203 "aliases": [ 00:13:56.203 "e1c43a29-63a9-41ba-9480-bd33c88b94dd" 00:13:56.203 ], 00:13:56.203 "product_name": "Raid Volume", 00:13:56.203 "block_size": 512, 00:13:56.203 "num_blocks": 63488, 00:13:56.203 "uuid": "e1c43a29-63a9-41ba-9480-bd33c88b94dd", 00:13:56.203 "assigned_rate_limits": { 00:13:56.203 "rw_ios_per_sec": 0, 00:13:56.203 "rw_mbytes_per_sec": 0, 00:13:56.203 "r_mbytes_per_sec": 0, 00:13:56.203 "w_mbytes_per_sec": 0 00:13:56.203 }, 00:13:56.203 "claimed": false, 00:13:56.203 "zoned": false, 00:13:56.203 "supported_io_types": { 00:13:56.203 "read": true, 00:13:56.203 "write": true, 00:13:56.203 "unmap": false, 00:13:56.203 "flush": false, 00:13:56.203 "reset": true, 00:13:56.203 "nvme_admin": false, 00:13:56.203 "nvme_io": false, 00:13:56.203 "nvme_io_md": false, 00:13:56.203 "write_zeroes": true, 00:13:56.203 "zcopy": false, 00:13:56.203 "get_zone_info": false, 00:13:56.203 "zone_management": false, 00:13:56.203 "zone_append": false, 00:13:56.203 "compare": false, 00:13:56.203 "compare_and_write": false, 00:13:56.203 "abort": false, 00:13:56.203 "seek_hole": false, 00:13:56.203 "seek_data": false, 00:13:56.203 "copy": false, 00:13:56.203 
"nvme_iov_md": false 00:13:56.203 }, 00:13:56.203 "memory_domains": [ 00:13:56.203 { 00:13:56.203 "dma_device_id": "system", 00:13:56.203 "dma_device_type": 1 00:13:56.203 }, 00:13:56.203 { 00:13:56.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:56.204 "dma_device_type": 2 00:13:56.204 }, 00:13:56.204 { 00:13:56.204 "dma_device_id": "system", 00:13:56.204 "dma_device_type": 1 00:13:56.204 }, 00:13:56.204 { 00:13:56.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:56.204 "dma_device_type": 2 00:13:56.204 }, 00:13:56.204 { 00:13:56.204 "dma_device_id": "system", 00:13:56.204 "dma_device_type": 1 00:13:56.204 }, 00:13:56.204 { 00:13:56.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:56.204 "dma_device_type": 2 00:13:56.204 }, 00:13:56.204 { 00:13:56.204 "dma_device_id": "system", 00:13:56.204 "dma_device_type": 1 00:13:56.204 }, 00:13:56.204 { 00:13:56.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:56.204 "dma_device_type": 2 00:13:56.204 } 00:13:56.204 ], 00:13:56.204 "driver_specific": { 00:13:56.204 "raid": { 00:13:56.204 "uuid": "e1c43a29-63a9-41ba-9480-bd33c88b94dd", 00:13:56.204 "strip_size_kb": 0, 00:13:56.204 "state": "online", 00:13:56.204 "raid_level": "raid1", 00:13:56.204 "superblock": true, 00:13:56.204 "num_base_bdevs": 4, 00:13:56.204 "num_base_bdevs_discovered": 4, 00:13:56.204 "num_base_bdevs_operational": 4, 00:13:56.204 "base_bdevs_list": [ 00:13:56.204 { 00:13:56.204 "name": "BaseBdev1", 00:13:56.204 "uuid": "d8814c8c-3cce-47c1-a725-4a16166ecced", 00:13:56.204 "is_configured": true, 00:13:56.204 "data_offset": 2048, 00:13:56.204 "data_size": 63488 00:13:56.204 }, 00:13:56.204 { 00:13:56.204 "name": "BaseBdev2", 00:13:56.204 "uuid": "32cd3989-b0c3-49e0-841d-da6038df8af6", 00:13:56.204 "is_configured": true, 00:13:56.204 "data_offset": 2048, 00:13:56.204 "data_size": 63488 00:13:56.204 }, 00:13:56.204 { 00:13:56.204 "name": "BaseBdev3", 00:13:56.204 "uuid": "8fa7f64e-6a15-404e-83ca-d5963f866773", 00:13:56.204 "is_configured": true, 
00:13:56.204 "data_offset": 2048, 00:13:56.204 "data_size": 63488 00:13:56.204 }, 00:13:56.204 { 00:13:56.204 "name": "BaseBdev4", 00:13:56.204 "uuid": "ad01abae-7127-44c9-aaae-c62235e2abb2", 00:13:56.204 "is_configured": true, 00:13:56.204 "data_offset": 2048, 00:13:56.204 "data_size": 63488 00:13:56.204 } 00:13:56.204 ] 00:13:56.204 } 00:13:56.204 } 00:13:56.204 }' 00:13:56.204 20:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:56.204 20:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:56.204 BaseBdev2 00:13:56.204 BaseBdev3 00:13:56.204 BaseBdev4' 00:13:56.204 20:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:56.204 20:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:56.204 20:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:56.204 20:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:56.204 20:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:56.204 20:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.204 20:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.204 20:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.204 20:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:56.204 20:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:56.204 20:10:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:56.204 20:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:56.204 20:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:56.204 20:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.204 20:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.204 20:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.464 20:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:56.464 20:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:56.464 20:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:56.464 20:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:56.464 20:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.464 20:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:56.464 20:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.464 20:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.464 20:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:56.464 20:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:56.464 20:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:13:56.464 20:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:56.464 20:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:56.464 20:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.464 20:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.464 20:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.464 20:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:56.464 20:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:56.464 20:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:56.464 20:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.464 20:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.464 [2024-10-17 20:10:41.973230] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:56.464 20:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.464 20:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:56.464 20:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:13:56.464 20:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:56.464 20:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:13:56.464 20:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:56.464 20:10:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:56.464 20:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:56.464 20:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:56.464 20:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:56.464 20:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:56.464 20:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:56.464 20:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.464 20:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.464 20:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.464 20:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.464 20:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.464 20:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:56.464 20:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.464 20:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.464 20:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.464 20:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.464 "name": "Existed_Raid", 00:13:56.464 "uuid": "e1c43a29-63a9-41ba-9480-bd33c88b94dd", 00:13:56.464 "strip_size_kb": 0, 00:13:56.464 
"state": "online", 00:13:56.464 "raid_level": "raid1", 00:13:56.464 "superblock": true, 00:13:56.464 "num_base_bdevs": 4, 00:13:56.464 "num_base_bdevs_discovered": 3, 00:13:56.464 "num_base_bdevs_operational": 3, 00:13:56.464 "base_bdevs_list": [ 00:13:56.465 { 00:13:56.465 "name": null, 00:13:56.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.465 "is_configured": false, 00:13:56.465 "data_offset": 0, 00:13:56.465 "data_size": 63488 00:13:56.465 }, 00:13:56.465 { 00:13:56.465 "name": "BaseBdev2", 00:13:56.465 "uuid": "32cd3989-b0c3-49e0-841d-da6038df8af6", 00:13:56.465 "is_configured": true, 00:13:56.465 "data_offset": 2048, 00:13:56.465 "data_size": 63488 00:13:56.465 }, 00:13:56.465 { 00:13:56.465 "name": "BaseBdev3", 00:13:56.465 "uuid": "8fa7f64e-6a15-404e-83ca-d5963f866773", 00:13:56.465 "is_configured": true, 00:13:56.465 "data_offset": 2048, 00:13:56.465 "data_size": 63488 00:13:56.465 }, 00:13:56.465 { 00:13:56.465 "name": "BaseBdev4", 00:13:56.465 "uuid": "ad01abae-7127-44c9-aaae-c62235e2abb2", 00:13:56.465 "is_configured": true, 00:13:56.465 "data_offset": 2048, 00:13:56.465 "data_size": 63488 00:13:56.465 } 00:13:56.465 ] 00:13:56.465 }' 00:13:56.465 20:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.465 20:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.032 20:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:57.032 20:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:57.032 20:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.032 20:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.032 20:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.032 20:10:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:57.032 20:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.032 20:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:57.032 20:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:57.032 20:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:57.032 20:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.032 20:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.032 [2024-10-17 20:10:42.617978] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:57.291 20:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.291 20:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:57.291 20:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:57.291 20:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.292 20:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.292 20:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:57.292 20:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.292 20:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.292 20:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:57.292 20:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid 
'!=' Existed_Raid ']' 00:13:57.292 20:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:57.292 20:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.292 20:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.292 [2024-10-17 20:10:42.752310] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:57.292 20:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.292 20:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:57.292 20:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:57.292 20:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:57.292 20:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.292 20:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.292 20:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.292 20:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.292 20:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:57.292 20:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:57.292 20:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:57.292 20:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.292 20:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.292 [2024-10-17 20:10:42.916753] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:57.292 [2024-10-17 20:10:42.916870] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:57.551 [2024-10-17 20:10:42.999775] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:57.551 [2024-10-17 20:10:42.999865] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:57.551 [2024-10-17 20:10:42.999883] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:57.551 20:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.551 20:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:57.551 20:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:57.551 20:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.551 20:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:57.551 20:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.551 20:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.551 20:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.551 20:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:57.551 20:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:57.551 20:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:57.551 20:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:57.551 20:10:43 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:57.551 20:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:57.551 20:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.551 20:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.551 BaseBdev2 00:13:57.551 20:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.551 20:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:57.551 20:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:57.551 20:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:57.551 20:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:57.551 20:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:57.551 20:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:57.551 20:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:57.551 20:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.551 20:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.551 20:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.551 20:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:57.551 20:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.551 20:10:43 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:13:57.551 [ 00:13:57.551 { 00:13:57.551 "name": "BaseBdev2", 00:13:57.551 "aliases": [ 00:13:57.551 "f22f8018-6698-4e0f-844a-59b8786fca9b" 00:13:57.551 ], 00:13:57.551 "product_name": "Malloc disk", 00:13:57.551 "block_size": 512, 00:13:57.551 "num_blocks": 65536, 00:13:57.551 "uuid": "f22f8018-6698-4e0f-844a-59b8786fca9b", 00:13:57.551 "assigned_rate_limits": { 00:13:57.551 "rw_ios_per_sec": 0, 00:13:57.551 "rw_mbytes_per_sec": 0, 00:13:57.551 "r_mbytes_per_sec": 0, 00:13:57.551 "w_mbytes_per_sec": 0 00:13:57.551 }, 00:13:57.551 "claimed": false, 00:13:57.551 "zoned": false, 00:13:57.551 "supported_io_types": { 00:13:57.551 "read": true, 00:13:57.551 "write": true, 00:13:57.552 "unmap": true, 00:13:57.552 "flush": true, 00:13:57.552 "reset": true, 00:13:57.552 "nvme_admin": false, 00:13:57.552 "nvme_io": false, 00:13:57.552 "nvme_io_md": false, 00:13:57.552 "write_zeroes": true, 00:13:57.552 "zcopy": true, 00:13:57.552 "get_zone_info": false, 00:13:57.552 "zone_management": false, 00:13:57.552 "zone_append": false, 00:13:57.552 "compare": false, 00:13:57.552 "compare_and_write": false, 00:13:57.552 "abort": true, 00:13:57.552 "seek_hole": false, 00:13:57.552 "seek_data": false, 00:13:57.552 "copy": true, 00:13:57.552 "nvme_iov_md": false 00:13:57.552 }, 00:13:57.552 "memory_domains": [ 00:13:57.552 { 00:13:57.552 "dma_device_id": "system", 00:13:57.552 "dma_device_type": 1 00:13:57.552 }, 00:13:57.552 { 00:13:57.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:57.552 "dma_device_type": 2 00:13:57.552 } 00:13:57.552 ], 00:13:57.552 "driver_specific": {} 00:13:57.552 } 00:13:57.552 ] 00:13:57.552 20:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.552 20:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:57.552 20:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:57.552 20:10:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:57.552 20:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:57.552 20:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.552 20:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.552 BaseBdev3 00:13:57.552 20:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.552 20:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:57.552 20:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:57.552 20:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:57.552 20:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:57.552 20:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:57.552 20:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:57.552 20:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:57.552 20:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.552 20:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.552 20:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.552 20:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:57.552 20:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.552 20:10:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.552 [ 00:13:57.552 { 00:13:57.552 "name": "BaseBdev3", 00:13:57.552 "aliases": [ 00:13:57.552 "b8f00c1f-a5e6-410a-8b31-6cf7c9eaa595" 00:13:57.552 ], 00:13:57.552 "product_name": "Malloc disk", 00:13:57.552 "block_size": 512, 00:13:57.552 "num_blocks": 65536, 00:13:57.552 "uuid": "b8f00c1f-a5e6-410a-8b31-6cf7c9eaa595", 00:13:57.552 "assigned_rate_limits": { 00:13:57.552 "rw_ios_per_sec": 0, 00:13:57.552 "rw_mbytes_per_sec": 0, 00:13:57.552 "r_mbytes_per_sec": 0, 00:13:57.552 "w_mbytes_per_sec": 0 00:13:57.552 }, 00:13:57.552 "claimed": false, 00:13:57.552 "zoned": false, 00:13:57.552 "supported_io_types": { 00:13:57.552 "read": true, 00:13:57.552 "write": true, 00:13:57.552 "unmap": true, 00:13:57.552 "flush": true, 00:13:57.552 "reset": true, 00:13:57.552 "nvme_admin": false, 00:13:57.552 "nvme_io": false, 00:13:57.552 "nvme_io_md": false, 00:13:57.552 "write_zeroes": true, 00:13:57.552 "zcopy": true, 00:13:57.552 "get_zone_info": false, 00:13:57.552 "zone_management": false, 00:13:57.552 "zone_append": false, 00:13:57.552 "compare": false, 00:13:57.552 "compare_and_write": false, 00:13:57.552 "abort": true, 00:13:57.552 "seek_hole": false, 00:13:57.552 "seek_data": false, 00:13:57.552 "copy": true, 00:13:57.552 "nvme_iov_md": false 00:13:57.552 }, 00:13:57.552 "memory_domains": [ 00:13:57.552 { 00:13:57.552 "dma_device_id": "system", 00:13:57.552 "dma_device_type": 1 00:13:57.552 }, 00:13:57.552 { 00:13:57.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:57.552 "dma_device_type": 2 00:13:57.552 } 00:13:57.552 ], 00:13:57.552 "driver_specific": {} 00:13:57.552 } 00:13:57.552 ] 00:13:57.552 20:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.552 20:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:57.552 20:10:43 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:57.552 20:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:57.552 20:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:57.552 20:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.552 20:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.812 BaseBdev4 00:13:57.812 20:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.812 20:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:57.812 20:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:13:57.812 20:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:57.812 20:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:57.812 20:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:57.812 20:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:57.812 20:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:57.812 20:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.812 20:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.812 20:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.812 20:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:57.812 20:10:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.812 20:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.812 [ 00:13:57.812 { 00:13:57.812 "name": "BaseBdev4", 00:13:57.812 "aliases": [ 00:13:57.812 "54c62187-af32-478e-be94-d687b334b848" 00:13:57.812 ], 00:13:57.812 "product_name": "Malloc disk", 00:13:57.812 "block_size": 512, 00:13:57.812 "num_blocks": 65536, 00:13:57.812 "uuid": "54c62187-af32-478e-be94-d687b334b848", 00:13:57.812 "assigned_rate_limits": { 00:13:57.812 "rw_ios_per_sec": 0, 00:13:57.812 "rw_mbytes_per_sec": 0, 00:13:57.812 "r_mbytes_per_sec": 0, 00:13:57.812 "w_mbytes_per_sec": 0 00:13:57.812 }, 00:13:57.812 "claimed": false, 00:13:57.812 "zoned": false, 00:13:57.812 "supported_io_types": { 00:13:57.812 "read": true, 00:13:57.812 "write": true, 00:13:57.812 "unmap": true, 00:13:57.812 "flush": true, 00:13:57.812 "reset": true, 00:13:57.812 "nvme_admin": false, 00:13:57.812 "nvme_io": false, 00:13:57.812 "nvme_io_md": false, 00:13:57.812 "write_zeroes": true, 00:13:57.812 "zcopy": true, 00:13:57.812 "get_zone_info": false, 00:13:57.812 "zone_management": false, 00:13:57.812 "zone_append": false, 00:13:57.812 "compare": false, 00:13:57.812 "compare_and_write": false, 00:13:57.812 "abort": true, 00:13:57.812 "seek_hole": false, 00:13:57.812 "seek_data": false, 00:13:57.812 "copy": true, 00:13:57.812 "nvme_iov_md": false 00:13:57.812 }, 00:13:57.812 "memory_domains": [ 00:13:57.812 { 00:13:57.812 "dma_device_id": "system", 00:13:57.812 "dma_device_type": 1 00:13:57.812 }, 00:13:57.812 { 00:13:57.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:57.812 "dma_device_type": 2 00:13:57.812 } 00:13:57.812 ], 00:13:57.812 "driver_specific": {} 00:13:57.812 } 00:13:57.812 ] 00:13:57.812 20:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.812 20:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 
00:13:57.812 20:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:57.812 20:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:57.812 20:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:57.812 20:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.812 20:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.812 [2024-10-17 20:10:43.277328] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:57.812 [2024-10-17 20:10:43.277442] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:57.812 [2024-10-17 20:10:43.277468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:57.812 [2024-10-17 20:10:43.279875] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:57.812 [2024-10-17 20:10:43.279953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:57.812 20:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.812 20:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:57.812 20:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:57.812 20:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:57.813 20:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:57.813 20:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:13:57.813 20:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:57.813 20:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.813 20:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.813 20:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.813 20:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.813 20:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.813 20:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.813 20:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:57.813 20:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.813 20:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.813 20:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.813 "name": "Existed_Raid", 00:13:57.813 "uuid": "aa6ab8ff-fc63-420b-9d7a-11eacd8a4f3f", 00:13:57.813 "strip_size_kb": 0, 00:13:57.813 "state": "configuring", 00:13:57.813 "raid_level": "raid1", 00:13:57.813 "superblock": true, 00:13:57.813 "num_base_bdevs": 4, 00:13:57.813 "num_base_bdevs_discovered": 3, 00:13:57.813 "num_base_bdevs_operational": 4, 00:13:57.813 "base_bdevs_list": [ 00:13:57.813 { 00:13:57.813 "name": "BaseBdev1", 00:13:57.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.813 "is_configured": false, 00:13:57.813 "data_offset": 0, 00:13:57.813 "data_size": 0 00:13:57.813 }, 00:13:57.813 { 00:13:57.813 "name": "BaseBdev2", 00:13:57.813 "uuid": "f22f8018-6698-4e0f-844a-59b8786fca9b", 
00:13:57.813 "is_configured": true, 00:13:57.813 "data_offset": 2048, 00:13:57.813 "data_size": 63488 00:13:57.813 }, 00:13:57.813 { 00:13:57.813 "name": "BaseBdev3", 00:13:57.813 "uuid": "b8f00c1f-a5e6-410a-8b31-6cf7c9eaa595", 00:13:57.813 "is_configured": true, 00:13:57.813 "data_offset": 2048, 00:13:57.813 "data_size": 63488 00:13:57.813 }, 00:13:57.813 { 00:13:57.813 "name": "BaseBdev4", 00:13:57.813 "uuid": "54c62187-af32-478e-be94-d687b334b848", 00:13:57.813 "is_configured": true, 00:13:57.813 "data_offset": 2048, 00:13:57.813 "data_size": 63488 00:13:57.813 } 00:13:57.813 ] 00:13:57.813 }' 00:13:57.813 20:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.813 20:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.380 20:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:58.380 20:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.380 20:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.380 [2024-10-17 20:10:43.817546] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:58.380 20:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.380 20:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:58.380 20:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:58.380 20:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:58.380 20:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:58.380 20:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:13:58.380 20:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:58.380 20:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.380 20:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.380 20:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.380 20:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.380 20:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.380 20:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.380 20:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.380 20:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:58.380 20:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.380 20:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.380 "name": "Existed_Raid", 00:13:58.380 "uuid": "aa6ab8ff-fc63-420b-9d7a-11eacd8a4f3f", 00:13:58.380 "strip_size_kb": 0, 00:13:58.380 "state": "configuring", 00:13:58.380 "raid_level": "raid1", 00:13:58.380 "superblock": true, 00:13:58.380 "num_base_bdevs": 4, 00:13:58.380 "num_base_bdevs_discovered": 2, 00:13:58.380 "num_base_bdevs_operational": 4, 00:13:58.380 "base_bdevs_list": [ 00:13:58.380 { 00:13:58.380 "name": "BaseBdev1", 00:13:58.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.380 "is_configured": false, 00:13:58.380 "data_offset": 0, 00:13:58.380 "data_size": 0 00:13:58.380 }, 00:13:58.380 { 00:13:58.380 "name": null, 00:13:58.380 "uuid": "f22f8018-6698-4e0f-844a-59b8786fca9b", 00:13:58.380 
"is_configured": false, 00:13:58.380 "data_offset": 0, 00:13:58.380 "data_size": 63488 00:13:58.380 }, 00:13:58.380 { 00:13:58.380 "name": "BaseBdev3", 00:13:58.380 "uuid": "b8f00c1f-a5e6-410a-8b31-6cf7c9eaa595", 00:13:58.380 "is_configured": true, 00:13:58.380 "data_offset": 2048, 00:13:58.380 "data_size": 63488 00:13:58.380 }, 00:13:58.380 { 00:13:58.380 "name": "BaseBdev4", 00:13:58.380 "uuid": "54c62187-af32-478e-be94-d687b334b848", 00:13:58.380 "is_configured": true, 00:13:58.380 "data_offset": 2048, 00:13:58.380 "data_size": 63488 00:13:58.380 } 00:13:58.380 ] 00:13:58.380 }' 00:13:58.380 20:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.380 20:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.948 20:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:58.948 20:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.948 20:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.948 20:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.948 20:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.948 20:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:58.948 20:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:58.948 20:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.948 20:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.948 [2024-10-17 20:10:44.441884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:58.948 BaseBdev1 
00:13:58.948 20:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.948 20:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:58.948 20:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:58.948 20:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:58.948 20:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:58.948 20:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:58.948 20:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:58.948 20:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:58.948 20:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.948 20:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.948 20:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.948 20:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:58.948 20:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.948 20:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.948 [ 00:13:58.948 { 00:13:58.948 "name": "BaseBdev1", 00:13:58.948 "aliases": [ 00:13:58.948 "46c502d7-529d-4450-9f8d-780150b6e8dd" 00:13:58.948 ], 00:13:58.948 "product_name": "Malloc disk", 00:13:58.948 "block_size": 512, 00:13:58.948 "num_blocks": 65536, 00:13:58.948 "uuid": "46c502d7-529d-4450-9f8d-780150b6e8dd", 00:13:58.948 "assigned_rate_limits": { 00:13:58.948 
"rw_ios_per_sec": 0, 00:13:58.948 "rw_mbytes_per_sec": 0, 00:13:58.948 "r_mbytes_per_sec": 0, 00:13:58.948 "w_mbytes_per_sec": 0 00:13:58.948 }, 00:13:58.948 "claimed": true, 00:13:58.948 "claim_type": "exclusive_write", 00:13:58.949 "zoned": false, 00:13:58.949 "supported_io_types": { 00:13:58.949 "read": true, 00:13:58.949 "write": true, 00:13:58.949 "unmap": true, 00:13:58.949 "flush": true, 00:13:58.949 "reset": true, 00:13:58.949 "nvme_admin": false, 00:13:58.949 "nvme_io": false, 00:13:58.949 "nvme_io_md": false, 00:13:58.949 "write_zeroes": true, 00:13:58.949 "zcopy": true, 00:13:58.949 "get_zone_info": false, 00:13:58.949 "zone_management": false, 00:13:58.949 "zone_append": false, 00:13:58.949 "compare": false, 00:13:58.949 "compare_and_write": false, 00:13:58.949 "abort": true, 00:13:58.949 "seek_hole": false, 00:13:58.949 "seek_data": false, 00:13:58.949 "copy": true, 00:13:58.949 "nvme_iov_md": false 00:13:58.949 }, 00:13:58.949 "memory_domains": [ 00:13:58.949 { 00:13:58.949 "dma_device_id": "system", 00:13:58.949 "dma_device_type": 1 00:13:58.949 }, 00:13:58.949 { 00:13:58.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:58.949 "dma_device_type": 2 00:13:58.949 } 00:13:58.949 ], 00:13:58.949 "driver_specific": {} 00:13:58.949 } 00:13:58.949 ] 00:13:58.949 20:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.949 20:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:58.949 20:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:58.949 20:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:58.949 20:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:58.949 20:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:13:58.949 20:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:58.949 20:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:58.949 20:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.949 20:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.949 20:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.949 20:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.949 20:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.949 20:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.949 20:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.949 20:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:58.949 20:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.949 20:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.949 "name": "Existed_Raid", 00:13:58.949 "uuid": "aa6ab8ff-fc63-420b-9d7a-11eacd8a4f3f", 00:13:58.949 "strip_size_kb": 0, 00:13:58.949 "state": "configuring", 00:13:58.949 "raid_level": "raid1", 00:13:58.949 "superblock": true, 00:13:58.949 "num_base_bdevs": 4, 00:13:58.949 "num_base_bdevs_discovered": 3, 00:13:58.949 "num_base_bdevs_operational": 4, 00:13:58.949 "base_bdevs_list": [ 00:13:58.949 { 00:13:58.949 "name": "BaseBdev1", 00:13:58.949 "uuid": "46c502d7-529d-4450-9f8d-780150b6e8dd", 00:13:58.949 "is_configured": true, 00:13:58.949 "data_offset": 2048, 00:13:58.949 "data_size": 63488 
00:13:58.949 }, 00:13:58.949 { 00:13:58.949 "name": null, 00:13:58.949 "uuid": "f22f8018-6698-4e0f-844a-59b8786fca9b", 00:13:58.949 "is_configured": false, 00:13:58.949 "data_offset": 0, 00:13:58.949 "data_size": 63488 00:13:58.949 }, 00:13:58.949 { 00:13:58.949 "name": "BaseBdev3", 00:13:58.949 "uuid": "b8f00c1f-a5e6-410a-8b31-6cf7c9eaa595", 00:13:58.949 "is_configured": true, 00:13:58.949 "data_offset": 2048, 00:13:58.949 "data_size": 63488 00:13:58.949 }, 00:13:58.949 { 00:13:58.949 "name": "BaseBdev4", 00:13:58.949 "uuid": "54c62187-af32-478e-be94-d687b334b848", 00:13:58.949 "is_configured": true, 00:13:58.949 "data_offset": 2048, 00:13:58.949 "data_size": 63488 00:13:58.949 } 00:13:58.949 ] 00:13:58.949 }' 00:13:58.949 20:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.949 20:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.516 20:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.516 20:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.516 20:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.516 20:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:59.516 20:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.516 20:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:59.516 20:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:59.516 20:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.516 20:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.516 
[2024-10-17 20:10:45.074102] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:59.516 20:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.516 20:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:59.516 20:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:59.516 20:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:59.516 20:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:59.516 20:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:59.516 20:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:59.516 20:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.516 20:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.516 20:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.516 20:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:59.517 20:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.517 20:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.517 20:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.517 20:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:59.517 20:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.517 20:10:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:59.517 "name": "Existed_Raid", 00:13:59.517 "uuid": "aa6ab8ff-fc63-420b-9d7a-11eacd8a4f3f", 00:13:59.517 "strip_size_kb": 0, 00:13:59.517 "state": "configuring", 00:13:59.517 "raid_level": "raid1", 00:13:59.517 "superblock": true, 00:13:59.517 "num_base_bdevs": 4, 00:13:59.517 "num_base_bdevs_discovered": 2, 00:13:59.517 "num_base_bdevs_operational": 4, 00:13:59.517 "base_bdevs_list": [ 00:13:59.517 { 00:13:59.517 "name": "BaseBdev1", 00:13:59.517 "uuid": "46c502d7-529d-4450-9f8d-780150b6e8dd", 00:13:59.517 "is_configured": true, 00:13:59.517 "data_offset": 2048, 00:13:59.517 "data_size": 63488 00:13:59.517 }, 00:13:59.517 { 00:13:59.517 "name": null, 00:13:59.517 "uuid": "f22f8018-6698-4e0f-844a-59b8786fca9b", 00:13:59.517 "is_configured": false, 00:13:59.517 "data_offset": 0, 00:13:59.517 "data_size": 63488 00:13:59.517 }, 00:13:59.517 { 00:13:59.517 "name": null, 00:13:59.517 "uuid": "b8f00c1f-a5e6-410a-8b31-6cf7c9eaa595", 00:13:59.517 "is_configured": false, 00:13:59.517 "data_offset": 0, 00:13:59.517 "data_size": 63488 00:13:59.517 }, 00:13:59.517 { 00:13:59.517 "name": "BaseBdev4", 00:13:59.517 "uuid": "54c62187-af32-478e-be94-d687b334b848", 00:13:59.517 "is_configured": true, 00:13:59.517 "data_offset": 2048, 00:13:59.517 "data_size": 63488 00:13:59.517 } 00:13:59.517 ] 00:13:59.517 }' 00:13:59.517 20:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:59.517 20:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.105 20:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.105 20:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.105 20:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.105 20:10:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:00.105 20:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.105 20:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:00.105 20:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:00.105 20:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.105 20:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.105 [2024-10-17 20:10:45.646280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:00.105 20:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.105 20:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:00.105 20:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:00.105 20:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:00.105 20:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:00.105 20:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:00.105 20:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:00.105 20:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.105 20:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.105 20:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:00.105 20:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.105 20:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.105 20:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:00.105 20:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.105 20:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.105 20:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.105 20:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.105 "name": "Existed_Raid", 00:14:00.105 "uuid": "aa6ab8ff-fc63-420b-9d7a-11eacd8a4f3f", 00:14:00.105 "strip_size_kb": 0, 00:14:00.105 "state": "configuring", 00:14:00.105 "raid_level": "raid1", 00:14:00.105 "superblock": true, 00:14:00.105 "num_base_bdevs": 4, 00:14:00.105 "num_base_bdevs_discovered": 3, 00:14:00.105 "num_base_bdevs_operational": 4, 00:14:00.105 "base_bdevs_list": [ 00:14:00.105 { 00:14:00.105 "name": "BaseBdev1", 00:14:00.105 "uuid": "46c502d7-529d-4450-9f8d-780150b6e8dd", 00:14:00.105 "is_configured": true, 00:14:00.105 "data_offset": 2048, 00:14:00.105 "data_size": 63488 00:14:00.105 }, 00:14:00.105 { 00:14:00.106 "name": null, 00:14:00.106 "uuid": "f22f8018-6698-4e0f-844a-59b8786fca9b", 00:14:00.106 "is_configured": false, 00:14:00.106 "data_offset": 0, 00:14:00.106 "data_size": 63488 00:14:00.106 }, 00:14:00.106 { 00:14:00.106 "name": "BaseBdev3", 00:14:00.106 "uuid": "b8f00c1f-a5e6-410a-8b31-6cf7c9eaa595", 00:14:00.106 "is_configured": true, 00:14:00.106 "data_offset": 2048, 00:14:00.106 "data_size": 63488 00:14:00.106 }, 00:14:00.106 { 00:14:00.106 "name": "BaseBdev4", 00:14:00.106 "uuid": 
"54c62187-af32-478e-be94-d687b334b848", 00:14:00.106 "is_configured": true, 00:14:00.106 "data_offset": 2048, 00:14:00.106 "data_size": 63488 00:14:00.106 } 00:14:00.106 ] 00:14:00.106 }' 00:14:00.106 20:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.106 20:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.673 20:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:00.673 20:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.673 20:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.673 20:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.673 20:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.673 20:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:00.673 20:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:00.673 20:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.673 20:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.673 [2024-10-17 20:10:46.222491] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:00.673 20:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.673 20:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:00.673 20:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:00.673 20:10:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:00.673 20:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:00.673 20:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:00.673 20:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:00.673 20:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.673 20:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.673 20:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.673 20:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.673 20:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.673 20:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:00.673 20:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.673 20:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.932 20:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.932 20:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.932 "name": "Existed_Raid", 00:14:00.932 "uuid": "aa6ab8ff-fc63-420b-9d7a-11eacd8a4f3f", 00:14:00.932 "strip_size_kb": 0, 00:14:00.932 "state": "configuring", 00:14:00.932 "raid_level": "raid1", 00:14:00.932 "superblock": true, 00:14:00.932 "num_base_bdevs": 4, 00:14:00.932 "num_base_bdevs_discovered": 2, 00:14:00.932 "num_base_bdevs_operational": 4, 00:14:00.932 "base_bdevs_list": [ 00:14:00.932 { 00:14:00.932 "name": null, 00:14:00.932 
"uuid": "46c502d7-529d-4450-9f8d-780150b6e8dd", 00:14:00.932 "is_configured": false, 00:14:00.932 "data_offset": 0, 00:14:00.932 "data_size": 63488 00:14:00.932 }, 00:14:00.932 { 00:14:00.932 "name": null, 00:14:00.932 "uuid": "f22f8018-6698-4e0f-844a-59b8786fca9b", 00:14:00.932 "is_configured": false, 00:14:00.932 "data_offset": 0, 00:14:00.932 "data_size": 63488 00:14:00.932 }, 00:14:00.932 { 00:14:00.932 "name": "BaseBdev3", 00:14:00.932 "uuid": "b8f00c1f-a5e6-410a-8b31-6cf7c9eaa595", 00:14:00.932 "is_configured": true, 00:14:00.932 "data_offset": 2048, 00:14:00.932 "data_size": 63488 00:14:00.932 }, 00:14:00.932 { 00:14:00.932 "name": "BaseBdev4", 00:14:00.932 "uuid": "54c62187-af32-478e-be94-d687b334b848", 00:14:00.932 "is_configured": true, 00:14:00.932 "data_offset": 2048, 00:14:00.932 "data_size": 63488 00:14:00.932 } 00:14:00.932 ] 00:14:00.932 }' 00:14:00.932 20:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.932 20:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.191 20:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:01.191 20:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.191 20:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.191 20:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.191 20:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.451 20:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:01.451 20:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:01.451 20:10:46 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.451 20:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.451 [2024-10-17 20:10:46.867414] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:01.451 20:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.451 20:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:01.451 20:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:01.451 20:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:01.451 20:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:01.451 20:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:01.451 20:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:01.451 20:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.451 20:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.451 20:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:01.451 20:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:01.451 20:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.451 20:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:01.451 20:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.451 20:10:46 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.451 20:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.451 20:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:01.451 "name": "Existed_Raid", 00:14:01.451 "uuid": "aa6ab8ff-fc63-420b-9d7a-11eacd8a4f3f", 00:14:01.451 "strip_size_kb": 0, 00:14:01.451 "state": "configuring", 00:14:01.451 "raid_level": "raid1", 00:14:01.451 "superblock": true, 00:14:01.451 "num_base_bdevs": 4, 00:14:01.451 "num_base_bdevs_discovered": 3, 00:14:01.451 "num_base_bdevs_operational": 4, 00:14:01.451 "base_bdevs_list": [ 00:14:01.451 { 00:14:01.451 "name": null, 00:14:01.451 "uuid": "46c502d7-529d-4450-9f8d-780150b6e8dd", 00:14:01.451 "is_configured": false, 00:14:01.451 "data_offset": 0, 00:14:01.451 "data_size": 63488 00:14:01.451 }, 00:14:01.451 { 00:14:01.451 "name": "BaseBdev2", 00:14:01.451 "uuid": "f22f8018-6698-4e0f-844a-59b8786fca9b", 00:14:01.451 "is_configured": true, 00:14:01.451 "data_offset": 2048, 00:14:01.451 "data_size": 63488 00:14:01.451 }, 00:14:01.451 { 00:14:01.451 "name": "BaseBdev3", 00:14:01.451 "uuid": "b8f00c1f-a5e6-410a-8b31-6cf7c9eaa595", 00:14:01.451 "is_configured": true, 00:14:01.451 "data_offset": 2048, 00:14:01.451 "data_size": 63488 00:14:01.451 }, 00:14:01.451 { 00:14:01.451 "name": "BaseBdev4", 00:14:01.451 "uuid": "54c62187-af32-478e-be94-d687b334b848", 00:14:01.451 "is_configured": true, 00:14:01.451 "data_offset": 2048, 00:14:01.451 "data_size": 63488 00:14:01.451 } 00:14:01.451 ] 00:14:01.451 }' 00:14:01.451 20:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:01.451 20:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.047 20:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:02.047 20:10:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.047 20:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.047 20:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.047 20:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.047 20:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:02.047 20:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.047 20:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:02.047 20:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.047 20:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.047 20:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.047 20:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 46c502d7-529d-4450-9f8d-780150b6e8dd 00:14:02.047 20:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.047 20:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.047 [2024-10-17 20:10:47.521419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:02.047 [2024-10-17 20:10:47.521702] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:02.047 [2024-10-17 20:10:47.521726] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:02.047 [2024-10-17 20:10:47.522103] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:14:02.047 NewBaseBdev 00:14:02.047 [2024-10-17 20:10:47.522322] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:02.047 [2024-10-17 20:10:47.522339] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:02.047 [2024-10-17 20:10:47.522502] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:02.047 20:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.047 20:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:02.047 20:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:14:02.047 20:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:02.047 20:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:02.047 20:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:02.047 20:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:02.047 20:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:02.047 20:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.047 20:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.048 20:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.048 20:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:02.048 20:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.048 20:10:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.048 [ 00:14:02.048 { 00:14:02.048 "name": "NewBaseBdev", 00:14:02.048 "aliases": [ 00:14:02.048 "46c502d7-529d-4450-9f8d-780150b6e8dd" 00:14:02.048 ], 00:14:02.048 "product_name": "Malloc disk", 00:14:02.048 "block_size": 512, 00:14:02.048 "num_blocks": 65536, 00:14:02.048 "uuid": "46c502d7-529d-4450-9f8d-780150b6e8dd", 00:14:02.048 "assigned_rate_limits": { 00:14:02.048 "rw_ios_per_sec": 0, 00:14:02.048 "rw_mbytes_per_sec": 0, 00:14:02.048 "r_mbytes_per_sec": 0, 00:14:02.048 "w_mbytes_per_sec": 0 00:14:02.048 }, 00:14:02.048 "claimed": true, 00:14:02.048 "claim_type": "exclusive_write", 00:14:02.048 "zoned": false, 00:14:02.048 "supported_io_types": { 00:14:02.048 "read": true, 00:14:02.048 "write": true, 00:14:02.048 "unmap": true, 00:14:02.048 "flush": true, 00:14:02.048 "reset": true, 00:14:02.048 "nvme_admin": false, 00:14:02.048 "nvme_io": false, 00:14:02.048 "nvme_io_md": false, 00:14:02.048 "write_zeroes": true, 00:14:02.048 "zcopy": true, 00:14:02.048 "get_zone_info": false, 00:14:02.048 "zone_management": false, 00:14:02.048 "zone_append": false, 00:14:02.048 "compare": false, 00:14:02.048 "compare_and_write": false, 00:14:02.048 "abort": true, 00:14:02.048 "seek_hole": false, 00:14:02.048 "seek_data": false, 00:14:02.048 "copy": true, 00:14:02.048 "nvme_iov_md": false 00:14:02.048 }, 00:14:02.048 "memory_domains": [ 00:14:02.048 { 00:14:02.048 "dma_device_id": "system", 00:14:02.048 "dma_device_type": 1 00:14:02.048 }, 00:14:02.048 { 00:14:02.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.048 "dma_device_type": 2 00:14:02.048 } 00:14:02.048 ], 00:14:02.048 "driver_specific": {} 00:14:02.048 } 00:14:02.048 ] 00:14:02.048 20:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.048 20:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:02.048 20:10:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:14:02.048 20:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:02.048 20:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:02.048 20:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:02.048 20:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:02.048 20:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:02.048 20:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.048 20:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.048 20:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.048 20:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.048 20:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.048 20:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.048 20:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:02.048 20:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.048 20:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.048 20:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.048 "name": "Existed_Raid", 00:14:02.048 "uuid": "aa6ab8ff-fc63-420b-9d7a-11eacd8a4f3f", 00:14:02.048 "strip_size_kb": 0, 00:14:02.048 
"state": "online", 00:14:02.048 "raid_level": "raid1", 00:14:02.048 "superblock": true, 00:14:02.048 "num_base_bdevs": 4, 00:14:02.048 "num_base_bdevs_discovered": 4, 00:14:02.048 "num_base_bdevs_operational": 4, 00:14:02.048 "base_bdevs_list": [ 00:14:02.048 { 00:14:02.048 "name": "NewBaseBdev", 00:14:02.048 "uuid": "46c502d7-529d-4450-9f8d-780150b6e8dd", 00:14:02.048 "is_configured": true, 00:14:02.048 "data_offset": 2048, 00:14:02.048 "data_size": 63488 00:14:02.048 }, 00:14:02.048 { 00:14:02.048 "name": "BaseBdev2", 00:14:02.048 "uuid": "f22f8018-6698-4e0f-844a-59b8786fca9b", 00:14:02.048 "is_configured": true, 00:14:02.048 "data_offset": 2048, 00:14:02.048 "data_size": 63488 00:14:02.048 }, 00:14:02.048 { 00:14:02.048 "name": "BaseBdev3", 00:14:02.048 "uuid": "b8f00c1f-a5e6-410a-8b31-6cf7c9eaa595", 00:14:02.048 "is_configured": true, 00:14:02.048 "data_offset": 2048, 00:14:02.048 "data_size": 63488 00:14:02.048 }, 00:14:02.048 { 00:14:02.048 "name": "BaseBdev4", 00:14:02.048 "uuid": "54c62187-af32-478e-be94-d687b334b848", 00:14:02.048 "is_configured": true, 00:14:02.048 "data_offset": 2048, 00:14:02.048 "data_size": 63488 00:14:02.048 } 00:14:02.048 ] 00:14:02.048 }' 00:14:02.048 20:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.048 20:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.617 20:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:02.617 20:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:02.617 20:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:02.617 20:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:02.617 20:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:02.617 
20:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:02.617 20:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:02.617 20:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:02.617 20:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.617 20:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.617 [2024-10-17 20:10:48.074031] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:02.617 20:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.617 20:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:02.617 "name": "Existed_Raid", 00:14:02.617 "aliases": [ 00:14:02.617 "aa6ab8ff-fc63-420b-9d7a-11eacd8a4f3f" 00:14:02.617 ], 00:14:02.617 "product_name": "Raid Volume", 00:14:02.617 "block_size": 512, 00:14:02.617 "num_blocks": 63488, 00:14:02.617 "uuid": "aa6ab8ff-fc63-420b-9d7a-11eacd8a4f3f", 00:14:02.617 "assigned_rate_limits": { 00:14:02.617 "rw_ios_per_sec": 0, 00:14:02.617 "rw_mbytes_per_sec": 0, 00:14:02.618 "r_mbytes_per_sec": 0, 00:14:02.618 "w_mbytes_per_sec": 0 00:14:02.618 }, 00:14:02.618 "claimed": false, 00:14:02.618 "zoned": false, 00:14:02.618 "supported_io_types": { 00:14:02.618 "read": true, 00:14:02.618 "write": true, 00:14:02.618 "unmap": false, 00:14:02.618 "flush": false, 00:14:02.618 "reset": true, 00:14:02.618 "nvme_admin": false, 00:14:02.618 "nvme_io": false, 00:14:02.618 "nvme_io_md": false, 00:14:02.618 "write_zeroes": true, 00:14:02.618 "zcopy": false, 00:14:02.618 "get_zone_info": false, 00:14:02.618 "zone_management": false, 00:14:02.618 "zone_append": false, 00:14:02.618 "compare": false, 00:14:02.618 "compare_and_write": false, 00:14:02.618 
"abort": false, 00:14:02.618 "seek_hole": false, 00:14:02.618 "seek_data": false, 00:14:02.618 "copy": false, 00:14:02.618 "nvme_iov_md": false 00:14:02.618 }, 00:14:02.618 "memory_domains": [ 00:14:02.618 { 00:14:02.618 "dma_device_id": "system", 00:14:02.618 "dma_device_type": 1 00:14:02.618 }, 00:14:02.618 { 00:14:02.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.618 "dma_device_type": 2 00:14:02.618 }, 00:14:02.618 { 00:14:02.618 "dma_device_id": "system", 00:14:02.618 "dma_device_type": 1 00:14:02.618 }, 00:14:02.618 { 00:14:02.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.618 "dma_device_type": 2 00:14:02.618 }, 00:14:02.618 { 00:14:02.618 "dma_device_id": "system", 00:14:02.618 "dma_device_type": 1 00:14:02.618 }, 00:14:02.618 { 00:14:02.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.618 "dma_device_type": 2 00:14:02.618 }, 00:14:02.618 { 00:14:02.618 "dma_device_id": "system", 00:14:02.618 "dma_device_type": 1 00:14:02.618 }, 00:14:02.618 { 00:14:02.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.618 "dma_device_type": 2 00:14:02.618 } 00:14:02.618 ], 00:14:02.618 "driver_specific": { 00:14:02.618 "raid": { 00:14:02.618 "uuid": "aa6ab8ff-fc63-420b-9d7a-11eacd8a4f3f", 00:14:02.618 "strip_size_kb": 0, 00:14:02.618 "state": "online", 00:14:02.618 "raid_level": "raid1", 00:14:02.618 "superblock": true, 00:14:02.618 "num_base_bdevs": 4, 00:14:02.618 "num_base_bdevs_discovered": 4, 00:14:02.618 "num_base_bdevs_operational": 4, 00:14:02.618 "base_bdevs_list": [ 00:14:02.618 { 00:14:02.618 "name": "NewBaseBdev", 00:14:02.618 "uuid": "46c502d7-529d-4450-9f8d-780150b6e8dd", 00:14:02.618 "is_configured": true, 00:14:02.618 "data_offset": 2048, 00:14:02.618 "data_size": 63488 00:14:02.618 }, 00:14:02.618 { 00:14:02.618 "name": "BaseBdev2", 00:14:02.618 "uuid": "f22f8018-6698-4e0f-844a-59b8786fca9b", 00:14:02.618 "is_configured": true, 00:14:02.618 "data_offset": 2048, 00:14:02.618 "data_size": 63488 00:14:02.618 }, 00:14:02.618 { 
00:14:02.618 "name": "BaseBdev3", 00:14:02.618 "uuid": "b8f00c1f-a5e6-410a-8b31-6cf7c9eaa595", 00:14:02.618 "is_configured": true, 00:14:02.618 "data_offset": 2048, 00:14:02.618 "data_size": 63488 00:14:02.618 }, 00:14:02.618 { 00:14:02.618 "name": "BaseBdev4", 00:14:02.618 "uuid": "54c62187-af32-478e-be94-d687b334b848", 00:14:02.618 "is_configured": true, 00:14:02.618 "data_offset": 2048, 00:14:02.618 "data_size": 63488 00:14:02.618 } 00:14:02.618 ] 00:14:02.618 } 00:14:02.618 } 00:14:02.618 }' 00:14:02.618 20:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:02.618 20:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:02.618 BaseBdev2 00:14:02.618 BaseBdev3 00:14:02.618 BaseBdev4' 00:14:02.618 20:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:02.618 20:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:02.618 20:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:02.618 20:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:02.618 20:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:02.618 20:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.618 20:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.618 20:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.877 20:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:14:02.877 20:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:02.877 20:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:02.877 20:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:02.877 20:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.877 20:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.877 20:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:02.877 20:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.877 20:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:02.877 20:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:02.877 20:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:02.877 20:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:02.877 20:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:02.877 20:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.877 20:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.877 20:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.877 20:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:02.877 20:10:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:02.877 20:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:02.877 20:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:02.877 20:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.877 20:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:02.877 20:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.877 20:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.877 20:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:02.877 20:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:02.877 20:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:02.877 20:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.877 20:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.877 [2024-10-17 20:10:48.437754] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:02.877 [2024-10-17 20:10:48.437792] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:02.877 [2024-10-17 20:10:48.437889] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:02.877 [2024-10-17 20:10:48.438274] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:02.877 [2024-10-17 20:10:48.438307] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:14:02.877 20:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.877 20:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73907 00:14:02.878 20:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 73907 ']' 00:14:02.878 20:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 73907 00:14:02.878 20:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:14:02.878 20:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:02.878 20:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73907 00:14:02.878 20:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:02.878 20:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:02.878 20:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73907' 00:14:02.878 killing process with pid 73907 00:14:02.878 20:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 73907 00:14:02.878 [2024-10-17 20:10:48.477416] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:02.878 20:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 73907 00:14:03.492 [2024-10-17 20:10:48.829743] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:04.427 20:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:04.427 00:14:04.427 real 0m12.719s 00:14:04.427 user 0m21.163s 00:14:04.427 sys 0m1.795s 00:14:04.427 20:10:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:14:04.427 20:10:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.427 ************************************ 00:14:04.427 END TEST raid_state_function_test_sb 00:14:04.427 ************************************ 00:14:04.427 20:10:49 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:14:04.427 20:10:49 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:04.427 20:10:49 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:04.427 20:10:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:04.427 ************************************ 00:14:04.427 START TEST raid_superblock_test 00:14:04.427 ************************************ 00:14:04.427 20:10:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 4 00:14:04.427 20:10:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:14:04.427 20:10:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:14:04.427 20:10:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:04.427 20:10:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:04.427 20:10:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:04.427 20:10:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:04.427 20:10:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:04.427 20:10:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:04.427 20:10:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:04.427 20:10:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:04.427 20:10:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:04.427 20:10:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:04.427 20:10:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:04.427 20:10:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:14:04.427 20:10:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:14:04.427 20:10:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74594 00:14:04.427 20:10:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:04.427 20:10:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74594 00:14:04.427 20:10:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 74594 ']' 00:14:04.427 20:10:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:04.427 20:10:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:04.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:04.427 20:10:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:04.427 20:10:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:04.427 20:10:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.427 [2024-10-17 20:10:50.047032] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 
00:14:04.427 [2024-10-17 20:10:50.047194] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74594 ] 00:14:04.686 [2024-10-17 20:10:50.213040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:04.944 [2024-10-17 20:10:50.345225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:04.944 [2024-10-17 20:10:50.546953] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:04.944 [2024-10-17 20:10:50.547047] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:05.513 20:10:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:05.513 20:10:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:14:05.513 20:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:05.513 20:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:05.513 20:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:05.513 20:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:05.513 20:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:05.513 20:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:05.513 20:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:05.513 20:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:05.513 20:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:14:05.513 
20:10:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.513 20:10:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.513 malloc1 00:14:05.513 20:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.513 20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:05.513 20:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.513 20:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.513 [2024-10-17 20:10:51.016792] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:05.513 [2024-10-17 20:10:51.016874] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:05.513 [2024-10-17 20:10:51.016910] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:05.513 [2024-10-17 20:10:51.016926] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:05.513 [2024-10-17 20:10:51.019684] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:05.513 [2024-10-17 20:10:51.019725] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:05.513 pt1 00:14:05.513 20:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.513 20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:05.513 20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:05.513 20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:05.513 20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:14:05.513 20:10:51 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:05.513 20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:05.513 20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:05.513 20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:05.513 20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:05.513 20:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.513 20:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.513 malloc2 00:14:05.513 20:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.513 20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:05.513 20:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.513 20:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.513 [2024-10-17 20:10:51.072602] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:05.513 [2024-10-17 20:10:51.072675] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:05.513 [2024-10-17 20:10:51.072708] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:05.513 [2024-10-17 20:10:51.072723] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:05.513 [2024-10-17 20:10:51.075487] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:05.513 [2024-10-17 20:10:51.075533] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:05.513 
pt2 00:14:05.513 20:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.513 20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:05.513 20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:05.513 20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:05.513 20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:05.513 20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:05.513 20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:05.513 20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:05.513 20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:05.513 20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:14:05.513 20:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.513 20:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.513 malloc3 00:14:05.513 20:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.513 20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:05.513 20:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.513 20:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.513 [2024-10-17 20:10:51.142588] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:05.513 [2024-10-17 20:10:51.142795] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:05.513 [2024-10-17 20:10:51.142841] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:05.513 [2024-10-17 20:10:51.142859] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:05.513 [2024-10-17 20:10:51.145573] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:05.513 [2024-10-17 20:10:51.145620] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:05.513 pt3 00:14:05.513 20:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.513 20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:05.513 20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:05.513 20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:14:05.513 20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:14:05.513 20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:14:05.513 20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:05.513 20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:05.513 20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:05.513 20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:14:05.513 20:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.513 20:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.772 malloc4 00:14:05.772 20:10:51 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.772 20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:05.772 20:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.772 20:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.772 [2024-10-17 20:10:51.198365] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:05.772 [2024-10-17 20:10:51.198560] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:05.772 [2024-10-17 20:10:51.198635] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:05.772 [2024-10-17 20:10:51.198744] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:05.772 [2024-10-17 20:10:51.201646] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:05.772 [2024-10-17 20:10:51.201810] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:05.772 pt4 00:14:05.772 20:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.772 20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:05.773 20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:05.773 20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:14:05.773 20:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.773 20:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.773 [2024-10-17 20:10:51.210551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:05.773 [2024-10-17 20:10:51.212939] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:05.773 [2024-10-17 20:10:51.213172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:05.773 [2024-10-17 20:10:51.213253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:05.773 [2024-10-17 20:10:51.213504] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:05.773 [2024-10-17 20:10:51.213539] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:05.773 [2024-10-17 20:10:51.213892] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:05.773 [2024-10-17 20:10:51.214129] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:05.773 [2024-10-17 20:10:51.214151] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:05.773 [2024-10-17 20:10:51.214332] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:05.773 20:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.773 20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:05.773 20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:05.773 20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:05.773 20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:05.773 20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:05.773 20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:05.773 20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.773 
20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.773 20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.773 20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.773 20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.773 20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.773 20:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.773 20:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.773 20:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.773 20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.773 "name": "raid_bdev1", 00:14:05.773 "uuid": "13236ed8-b9e9-40bb-b999-c26dbdfb9eaf", 00:14:05.773 "strip_size_kb": 0, 00:14:05.773 "state": "online", 00:14:05.773 "raid_level": "raid1", 00:14:05.773 "superblock": true, 00:14:05.773 "num_base_bdevs": 4, 00:14:05.773 "num_base_bdevs_discovered": 4, 00:14:05.773 "num_base_bdevs_operational": 4, 00:14:05.773 "base_bdevs_list": [ 00:14:05.773 { 00:14:05.773 "name": "pt1", 00:14:05.773 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:05.773 "is_configured": true, 00:14:05.773 "data_offset": 2048, 00:14:05.773 "data_size": 63488 00:14:05.773 }, 00:14:05.773 { 00:14:05.773 "name": "pt2", 00:14:05.773 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:05.773 "is_configured": true, 00:14:05.773 "data_offset": 2048, 00:14:05.773 "data_size": 63488 00:14:05.773 }, 00:14:05.773 { 00:14:05.773 "name": "pt3", 00:14:05.773 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:05.773 "is_configured": true, 00:14:05.773 "data_offset": 2048, 00:14:05.773 "data_size": 63488 
00:14:05.773 }, 00:14:05.773 { 00:14:05.773 "name": "pt4", 00:14:05.773 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:05.773 "is_configured": true, 00:14:05.773 "data_offset": 2048, 00:14:05.773 "data_size": 63488 00:14:05.773 } 00:14:05.773 ] 00:14:05.773 }' 00:14:05.773 20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.773 20:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.342 20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:06.342 20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:06.342 20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:06.342 20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:06.342 20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:06.342 20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:06.342 20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:06.342 20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:06.342 20:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.342 20:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.342 [2024-10-17 20:10:51.759096] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:06.342 20:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.342 20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:06.342 "name": "raid_bdev1", 00:14:06.342 "aliases": [ 00:14:06.342 "13236ed8-b9e9-40bb-b999-c26dbdfb9eaf" 00:14:06.342 ], 
00:14:06.342 "product_name": "Raid Volume",
00:14:06.342 "block_size": 512,
00:14:06.342 "num_blocks": 63488,
00:14:06.342 "uuid": "13236ed8-b9e9-40bb-b999-c26dbdfb9eaf",
00:14:06.342 "assigned_rate_limits": {
00:14:06.342 "rw_ios_per_sec": 0,
00:14:06.342 "rw_mbytes_per_sec": 0,
00:14:06.342 "r_mbytes_per_sec": 0,
00:14:06.342 "w_mbytes_per_sec": 0
00:14:06.342 },
00:14:06.342 "claimed": false,
00:14:06.342 "zoned": false,
00:14:06.342 "supported_io_types": {
00:14:06.342 "read": true,
00:14:06.342 "write": true,
00:14:06.342 "unmap": false,
00:14:06.342 "flush": false,
00:14:06.342 "reset": true,
00:14:06.342 "nvme_admin": false,
00:14:06.342 "nvme_io": false,
00:14:06.342 "nvme_io_md": false,
00:14:06.342 "write_zeroes": true,
00:14:06.342 "zcopy": false,
00:14:06.342 "get_zone_info": false,
00:14:06.342 "zone_management": false,
00:14:06.342 "zone_append": false,
00:14:06.342 "compare": false,
00:14:06.342 "compare_and_write": false,
00:14:06.342 "abort": false,
00:14:06.342 "seek_hole": false,
00:14:06.342 "seek_data": false,
00:14:06.342 "copy": false,
00:14:06.342 "nvme_iov_md": false
00:14:06.342 },
00:14:06.342 "memory_domains": [
00:14:06.342 {
00:14:06.342 "dma_device_id": "system",
00:14:06.342 "dma_device_type": 1
00:14:06.342 },
00:14:06.342 {
00:14:06.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:06.342 "dma_device_type": 2
00:14:06.342 },
00:14:06.342 {
00:14:06.342 "dma_device_id": "system",
00:14:06.342 "dma_device_type": 1
00:14:06.342 },
00:14:06.342 {
00:14:06.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:06.342 "dma_device_type": 2
00:14:06.342 },
00:14:06.342 {
00:14:06.342 "dma_device_id": "system",
00:14:06.342 "dma_device_type": 1
00:14:06.342 },
00:14:06.342 {
00:14:06.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:06.342 "dma_device_type": 2
00:14:06.342 },
00:14:06.342 {
00:14:06.342 "dma_device_id": "system",
00:14:06.342 "dma_device_type": 1
00:14:06.342 },
00:14:06.342 {
00:14:06.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:06.342 "dma_device_type": 2
00:14:06.342 }
00:14:06.342 ],
00:14:06.342 "driver_specific": {
00:14:06.342 "raid": {
00:14:06.342 "uuid": "13236ed8-b9e9-40bb-b999-c26dbdfb9eaf",
00:14:06.342 "strip_size_kb": 0,
00:14:06.342 "state": "online",
00:14:06.342 "raid_level": "raid1",
00:14:06.342 "superblock": true,
00:14:06.342 "num_base_bdevs": 4,
00:14:06.342 "num_base_bdevs_discovered": 4,
00:14:06.342 "num_base_bdevs_operational": 4,
00:14:06.342 "base_bdevs_list": [
00:14:06.342 {
00:14:06.342 "name": "pt1",
00:14:06.342 "uuid": "00000000-0000-0000-0000-000000000001",
00:14:06.342 "is_configured": true,
00:14:06.342 "data_offset": 2048,
00:14:06.342 "data_size": 63488
00:14:06.342 },
00:14:06.342 {
00:14:06.342 "name": "pt2",
00:14:06.342 "uuid": "00000000-0000-0000-0000-000000000002",
00:14:06.342 "is_configured": true,
00:14:06.342 "data_offset": 2048,
00:14:06.342 "data_size": 63488
00:14:06.342 },
00:14:06.342 {
00:14:06.342 "name": "pt3",
00:14:06.342 "uuid": "00000000-0000-0000-0000-000000000003",
00:14:06.342 "is_configured": true,
00:14:06.342 "data_offset": 2048,
00:14:06.342 "data_size": 63488
00:14:06.342 },
00:14:06.342 {
00:14:06.342 "name": "pt4",
00:14:06.342 "uuid": "00000000-0000-0000-0000-000000000004",
00:14:06.342 "is_configured": true,
00:14:06.342 "data_offset": 2048,
00:14:06.342 "data_size": 63488
00:14:06.342 }
00:14:06.342 ]
00:14:06.342 }
00:14:06.342 }
00:14:06.342 }'
00:14:06.342 20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:14:06.342 20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:14:06.342 pt2
00:14:06.342 pt3
00:14:06.342 pt4'
00:14:06.342 20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:14:06.342 20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:14:06.342 20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:14:06.342 20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:14:06.342 20:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:06.342 20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:14:06.342 20:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:06.342 20:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:06.342 20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:14:06.342 20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:14:06.342 20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:14:06.342 20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:14:06.342 20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:14:06.342 20:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:06.342 20:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:06.342 20:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:06.602 20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:14:06.602 20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:14:06.602 20:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:14:06.602 20:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:14:06.602 20:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:06.602 20:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:06.602 20:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:14:06.602 20:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:06.602 20:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:14:06.602 20:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:14:06.602 20:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:14:06.602 20:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:14:06.602 20:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:06.602 20:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:14:06.602 20:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:06.602 20:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:06.602 20:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:14:06.602 20:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:14:06.602 20:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:14:06.602 20:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:14:06.602 20:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:06.602 20:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:06.602 [2024-10-17 20:10:52.119107] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:14:06.602 20:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:06.602 20:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=13236ed8-b9e9-40bb-b999-c26dbdfb9eaf
00:14:06.602 20:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 13236ed8-b9e9-40bb-b999-c26dbdfb9eaf ']'
00:14:06.602 20:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:14:06.602 20:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:06.602 20:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:06.602 [2024-10-17 20:10:52.162736] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:14:06.602 [2024-10-17 20:10:52.162768] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:14:06.602 [2024-10-17 20:10:52.162875] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:14:06.602 [2024-10-17 20:10:52.162984] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:14:06.602 [2024-10-17 20:10:52.163024] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:14:06.602 20:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:06.602 20:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:06.602 20:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:06.602 20:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:14:06.602 20:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:06.602 20:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:06.602 20:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:14:06.602 20:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:14:06.602 20:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:14:06.602 20:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:14:06.602 20:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:06.602 20:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:06.602 20:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:06.602 20:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:14:06.602 20:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:14:06.602 20:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:06.602 20:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:06.602 20:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:06.602 20:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:14:06.602 20:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:14:06.602 20:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:06.602 20:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:06.602 20:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:06.602 20:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:14:06.602 20:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4
00:14:06.602 20:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:06.602 20:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:06.861 20:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:06.861 20:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:14:06.861 20:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:06.861 20:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:06.861 20:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:14:06.861 20:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:06.861 20:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:14:06.861 20:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:14:06.861 20:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0
00:14:06.861 20:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:14:06.861 20:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:14:06.861 20:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:14:06.861 20:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:14:06.861 20:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:14:06.861 20:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:14:06.861 20:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:06.861 20:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:06.861 [2024-10-17 20:10:52.326778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:14:06.861 [2024-10-17 20:10:52.329253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:14:06.861 [2024-10-17 20:10:52.329326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:14:06.861 [2024-10-17 20:10:52.329381] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed
00:14:06.861 [2024-10-17 20:10:52.329454] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:14:06.861 [2024-10-17 20:10:52.329530] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:14:06.861 [2024-10-17 20:10:52.329566] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:14:06.861 [2024-10-17 20:10:52.329599] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4
00:14:06.861 [2024-10-17 20:10:52.329622] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:14:06.861 [2024-10-17 20:10:52.329639] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:14:06.861 request:
00:14:06.861 {
00:14:06.861 "name": "raid_bdev1",
00:14:06.861 "raid_level": "raid1",
00:14:06.861 "base_bdevs": [
00:14:06.861 "malloc1",
00:14:06.861 "malloc2",
00:14:06.861 "malloc3",
00:14:06.861 "malloc4"
00:14:06.861 ],
00:14:06.861 "superblock": false,
00:14:06.861 "method": "bdev_raid_create",
00:14:06.861 "req_id": 1
00:14:06.861 }
00:14:06.861 Got JSON-RPC error response
00:14:06.861 response:
00:14:06.861 {
00:14:06.861 "code": -17,
00:14:06.861 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:14:06.861 }
00:14:06.861 20:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:14:06.861 20:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1
00:14:06.861 20:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:14:06.861 20:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:14:06.861 20:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:14:06.861 20:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:06.861 20:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:14:06.861 20:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:06.861 20:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:06.861 20:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:06.861 20:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:14:06.861 20:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:14:06.861 20:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
20:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:06.861 20:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:06.861 [2024-10-17 20:10:52.394785] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:14:06.861 [2024-10-17 20:10:52.394865] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:06.861 [2024-10-17 20:10:52.394891] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:14:06.861 [2024-10-17 20:10:52.394914] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:06.861 [2024-10-17 20:10:52.397809] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:06.862 [2024-10-17 20:10:52.397864] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:14:06.862 [2024-10-17 20:10:52.397973] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:14:06.862 [2024-10-17 20:10:52.398077] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:14:06.862 pt1
00:14:06.862 20:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:06.862 20:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4
00:14:06.862 20:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:06.862 20:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:06.862 20:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:06.862 20:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:06.862 20:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:14:06.862 20:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:06.862 20:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:06.862 20:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:06.862 20:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:06.862 20:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:06.862 20:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:06.862 20:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:06.862 20:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:06.862 20:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:06.862 20:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:06.862 "name": "raid_bdev1",
00:14:06.862 "uuid": "13236ed8-b9e9-40bb-b999-c26dbdfb9eaf",
00:14:06.862 "strip_size_kb": 0,
00:14:06.862 "state": "configuring",
00:14:06.862 "raid_level": "raid1",
00:14:06.862 "superblock": true,
00:14:06.862 "num_base_bdevs": 4,
00:14:06.862 "num_base_bdevs_discovered": 1,
00:14:06.862 "num_base_bdevs_operational": 4,
00:14:06.862 "base_bdevs_list": [
00:14:06.862 {
00:14:06.862 "name": "pt1",
00:14:06.862 "uuid": "00000000-0000-0000-0000-000000000001",
00:14:06.862 "is_configured": true,
00:14:06.862 "data_offset": 2048,
00:14:06.862 "data_size": 63488
00:14:06.862 },
00:14:06.862 {
00:14:06.862 "name": null,
00:14:06.862 "uuid": "00000000-0000-0000-0000-000000000002",
00:14:06.862 "is_configured": false,
00:14:06.862 "data_offset": 2048,
00:14:06.862 "data_size": 63488
00:14:06.862 },
00:14:06.862 {
00:14:06.862 "name": null,
00:14:06.862 "uuid": "00000000-0000-0000-0000-000000000003",
00:14:06.862 "is_configured": false,
00:14:06.862 "data_offset": 2048,
00:14:06.862 "data_size": 63488
00:14:06.862 },
00:14:06.862 {
00:14:06.862 "name": null,
00:14:06.862 "uuid": "00000000-0000-0000-0000-000000000004",
00:14:06.862 "is_configured": false,
00:14:06.862 "data_offset": 2048,
00:14:06.862 "data_size": 63488
00:14:06.862 }
00:14:06.862 ]
00:14:06.862 }'
00:14:06.862 20:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:06.862 20:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:07.430 20:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']'
00:14:07.430 20:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:14:07.430 20:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:07.430 20:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:07.430 [2024-10-17 20:10:52.934919] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:14:07.430 [2024-10-17 20:10:52.935018] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:07.430 [2024-10-17 20:10:52.935050] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:14:07.430 [2024-10-17 20:10:52.935069] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:07.430 [2024-10-17 20:10:52.935655] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:07.430 [2024-10-17 20:10:52.935705] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:14:07.430 [2024-10-17 20:10:52.935810] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:14:07.430 [2024-10-17 20:10:52.935857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:14:07.430 pt2
00:14:07.430 20:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:07.430 20:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:14:07.430 20:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:07.430 20:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:07.430 [2024-10-17 20:10:52.942913] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:14:07.430 20:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:07.430 20:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4
00:14:07.430 20:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:07.430 20:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:07.430 20:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:07.430 20:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:07.430 20:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:14:07.430 20:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:07.430 20:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:07.430 20:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:07.430 20:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:07.430 20:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:07.430 20:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:07.430 20:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:07.430 20:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:07.430 20:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:07.430 20:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:07.430 "name": "raid_bdev1",
00:14:07.430 "uuid": "13236ed8-b9e9-40bb-b999-c26dbdfb9eaf",
00:14:07.430 "strip_size_kb": 0,
00:14:07.430 "state": "configuring",
00:14:07.430 "raid_level": "raid1",
00:14:07.430 "superblock": true,
00:14:07.430 "num_base_bdevs": 4,
00:14:07.430 "num_base_bdevs_discovered": 1,
00:14:07.430 "num_base_bdevs_operational": 4,
00:14:07.430 "base_bdevs_list": [
00:14:07.430 {
00:14:07.430 "name": "pt1",
00:14:07.430 "uuid": "00000000-0000-0000-0000-000000000001",
00:14:07.430 "is_configured": true,
00:14:07.430 "data_offset": 2048,
00:14:07.430 "data_size": 63488
00:14:07.430 },
00:14:07.430 {
00:14:07.430 "name": null,
00:14:07.430 "uuid": "00000000-0000-0000-0000-000000000002",
00:14:07.430 "is_configured": false,
00:14:07.430 "data_offset": 0,
00:14:07.430 "data_size": 63488
00:14:07.430 },
00:14:07.430 {
00:14:07.430 "name": null,
00:14:07.430 "uuid": "00000000-0000-0000-0000-000000000003",
00:14:07.430 "is_configured": false,
00:14:07.430 "data_offset": 2048,
00:14:07.430 "data_size": 63488
00:14:07.430 },
00:14:07.430 {
00:14:07.431 "name": null,
00:14:07.431 "uuid": "00000000-0000-0000-0000-000000000004",
00:14:07.431 "is_configured": false,
00:14:07.431 "data_offset": 2048,
00:14:07.431 "data_size": 63488
00:14:07.431 }
00:14:07.431 ]
00:14:07.431 }'
00:14:07.431 20:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:07.431 20:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:07.998 20:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:14:07.998 20:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:14:07.998 20:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:14:07.998 20:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:07.998 20:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:07.998 [2024-10-17 20:10:53.443092] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:14:07.998 [2024-10-17 20:10:53.443170] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:07.998 [2024-10-17 20:10:53.443209] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:14:07.998 [2024-10-17 20:10:53.443226] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:07.999 [2024-10-17 20:10:53.443784] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:07.999 [2024-10-17 20:10:53.443820] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:14:07.999 [2024-10-17 20:10:53.443940] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:14:07.999 [2024-10-17 20:10:53.443971] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:14:07.999 pt2
00:14:07.999 20:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:07.999 20:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:14:07.999 20:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:14:07.999 20:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:14:07.999 20:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:07.999 20:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:07.999 [2024-10-17 20:10:53.451047] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:14:07.999 [2024-10-17 20:10:53.451103] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:07.999 [2024-10-17 20:10:53.451131] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:14:07.999 [2024-10-17 20:10:53.451145] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:07.999 [2024-10-17 20:10:53.451589] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:07.999 [2024-10-17 20:10:53.451630] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:14:07.999 [2024-10-17 20:10:53.451709] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:14:07.999 [2024-10-17 20:10:53.451737] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:14:07.999 pt3
00:14:07.999 20:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:07.999 20:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:14:07.999 20:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:14:07.999 20:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:14:07.999 20:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:07.999 20:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:07.999 [2024-10-17 20:10:53.459019] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:14:07.999 [2024-10-17 20:10:53.459070] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:07.999 [2024-10-17 20:10:53.459095] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180
00:14:07.999 [2024-10-17 20:10:53.459109] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:07.999 [2024-10-17 20:10:53.459556] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:07.999 [2024-10-17 20:10:53.459597] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:14:07.999 [2024-10-17 20:10:53.459678] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4
00:14:07.999 [2024-10-17 20:10:53.459705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:14:07.999 [2024-10-17 20:10:53.459884] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:14:07.999 [2024-10-17 20:10:53.459909] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:14:07.999 [2024-10-17 20:10:53.460275] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:14:07.999 [2024-10-17 20:10:53.460485] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:14:07.999 [2024-10-17 20:10:53.460515] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:14:07.999 [2024-10-17 20:10:53.460676] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:07.999 pt4
00:14:07.999 20:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:07.999 20:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:14:07.999 20:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:14:07.999 20:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:14:07.999 20:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:07.999 20:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:07.999 20:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:07.999 20:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:07.999 20:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:14:07.999 20:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:07.999 20:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:07.999 20:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:07.999 20:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:07.999 20:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:07.999 20:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:07.999 20:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:07.999 20:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:07.999 20:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:07.999 20:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:07.999 "name": "raid_bdev1",
00:14:07.999 "uuid": "13236ed8-b9e9-40bb-b999-c26dbdfb9eaf",
00:14:07.999 "strip_size_kb": 0,
00:14:07.999 "state": "online",
00:14:07.999 "raid_level": "raid1",
00:14:07.999 "superblock": true,
00:14:07.999 "num_base_bdevs": 4,
00:14:07.999 "num_base_bdevs_discovered": 4,
00:14:07.999 "num_base_bdevs_operational": 4,
00:14:07.999 "base_bdevs_list": [
00:14:07.999 {
00:14:07.999 "name": "pt1",
00:14:07.999 "uuid": "00000000-0000-0000-0000-000000000001",
00:14:07.999 "is_configured": true,
00:14:07.999 "data_offset": 2048,
00:14:07.999 "data_size": 63488
00:14:07.999 },
00:14:07.999 {
00:14:07.999 "name": "pt2",
00:14:07.999 "uuid": "00000000-0000-0000-0000-000000000002",
00:14:07.999 "is_configured": true,
00:14:07.999 "data_offset": 2048,
00:14:07.999 "data_size": 63488
00:14:07.999 },
00:14:07.999 {
00:14:07.999 "name": "pt3",
00:14:07.999 "uuid": "00000000-0000-0000-0000-000000000003",
00:14:07.999 "is_configured": true,
00:14:07.999 "data_offset": 2048,
00:14:07.999 "data_size": 63488
00:14:07.999 },
00:14:07.999 {
00:14:07.999 "name": "pt4",
00:14:07.999 "uuid": "00000000-0000-0000-0000-000000000004",
00:14:07.999 "is_configured": true,
00:14:07.999 "data_offset": 2048,
00:14:07.999 "data_size": 63488
00:14:07.999 }
00:14:07.999 ]
00:14:07.999 }'
00:14:07.999 20:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:07.999 20:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:08.567 20:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:14:08.567 20:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:14:08.567 20:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:14:08.567 20:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:14:08.567 20:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:14:08.567 20:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:14:08.567 20:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:14:08.567 20:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:08.567 20:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:14:08.567 20:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:08.567 [2024-10-17 20:10:54.035642] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:14:08.567 20:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:08.567 20:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:14:08.567 "name": "raid_bdev1",
00:14:08.567 "aliases": [
00:14:08.567 "13236ed8-b9e9-40bb-b999-c26dbdfb9eaf"
00:14:08.567 ],
00:14:08.567 "product_name": "Raid Volume",
00:14:08.568 "block_size": 512,
00:14:08.568 "num_blocks": 63488,
00:14:08.568 "uuid": "13236ed8-b9e9-40bb-b999-c26dbdfb9eaf",
00:14:08.568 "assigned_rate_limits": {
00:14:08.568 "rw_ios_per_sec": 0,
00:14:08.568 "rw_mbytes_per_sec": 0,
00:14:08.568 "r_mbytes_per_sec": 0,
00:14:08.568 "w_mbytes_per_sec": 0
00:14:08.568 },
00:14:08.568 "claimed": false,
00:14:08.568 "zoned": false,
00:14:08.568 "supported_io_types": {
00:14:08.568 "read": true,
00:14:08.568 "write": true,
00:14:08.568 "unmap": false,
00:14:08.568 "flush": false,
00:14:08.568 "reset": true,
00:14:08.568 "nvme_admin": false,
00:14:08.568 "nvme_io": false,
00:14:08.568 "nvme_io_md": false,
00:14:08.568 "write_zeroes": true,
00:14:08.568 "zcopy": false,
00:14:08.568 "get_zone_info": false,
00:14:08.568 "zone_management": false,
00:14:08.568 "zone_append": false,
00:14:08.568 "compare": false,
00:14:08.568 "compare_and_write": false,
00:14:08.568 "abort": false,
00:14:08.568 "seek_hole": false,
00:14:08.568 "seek_data": false,
00:14:08.568 "copy": false,
00:14:08.568 "nvme_iov_md": false
00:14:08.568 },
00:14:08.568 "memory_domains": [
00:14:08.568 {
00:14:08.568 "dma_device_id": "system",
00:14:08.568 "dma_device_type": 1
00:14:08.568 },
00:14:08.568 {
00:14:08.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:08.568 "dma_device_type": 2
00:14:08.568 },
00:14:08.568 {
00:14:08.568 "dma_device_id": "system",
00:14:08.568 "dma_device_type": 1
00:14:08.568 },
00:14:08.568 {
00:14:08.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:08.568 "dma_device_type": 2
00:14:08.568 },
00:14:08.568 {
00:14:08.568 "dma_device_id": "system",
00:14:08.568 "dma_device_type": 1
00:14:08.568 },
00:14:08.568 {
00:14:08.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:08.568 "dma_device_type": 2
00:14:08.568 },
00:14:08.568 {
00:14:08.568 "dma_device_id": "system",
00:14:08.568 "dma_device_type": 1
00:14:08.568 },
00:14:08.568 {
00:14:08.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:08.568 "dma_device_type": 2
00:14:08.568 }
00:14:08.568 ],
00:14:08.568 "driver_specific": {
00:14:08.568 "raid": {
00:14:08.568 "uuid": "13236ed8-b9e9-40bb-b999-c26dbdfb9eaf",
00:14:08.568 "strip_size_kb": 0,
00:14:08.568 "state": "online",
00:14:08.568 "raid_level": "raid1",
00:14:08.568 "superblock": true,
00:14:08.568 "num_base_bdevs": 4,
00:14:08.568 "num_base_bdevs_discovered": 4,
00:14:08.568 "num_base_bdevs_operational": 4,
00:14:08.568 "base_bdevs_list": [
00:14:08.568 {
00:14:08.568 "name": "pt1",
00:14:08.568 "uuid": "00000000-0000-0000-0000-000000000001",
00:14:08.568 "is_configured": true,
00:14:08.568 "data_offset": 2048,
00:14:08.568 "data_size": 63488
00:14:08.568 },
00:14:08.568 {
00:14:08.568 "name": "pt2",
00:14:08.568 "uuid": "00000000-0000-0000-0000-000000000002",
00:14:08.568 "is_configured": true,
00:14:08.568 "data_offset": 2048,
00:14:08.568 "data_size": 63488
00:14:08.568 },
00:14:08.568 {
00:14:08.568 "name": "pt3",
00:14:08.568 "uuid": "00000000-0000-0000-0000-000000000003",
00:14:08.568 "is_configured": true,
00:14:08.568 "data_offset": 2048,
00:14:08.568 "data_size": 63488
00:14:08.568 },
00:14:08.568 {
00:14:08.568 "name": "pt4",
00:14:08.568 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:14:08.568 "is_configured": true, 00:14:08.568 "data_offset": 2048, 00:14:08.568 "data_size": 63488 00:14:08.568 } 00:14:08.568 ] 00:14:08.568 } 00:14:08.568 } 00:14:08.568 }' 00:14:08.568 20:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:08.568 20:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:08.568 pt2 00:14:08.568 pt3 00:14:08.568 pt4' 00:14:08.568 20:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:08.568 20:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:08.568 20:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:08.568 20:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:08.568 20:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:08.568 20:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.568 20:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.568 20:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.828 20:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:08.828 20:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:08.828 20:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:08.828 20:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:08.828 20:10:54 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.828 20:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.828 20:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:08.828 20:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.828 20:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:08.828 20:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:08.828 20:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:08.828 20:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:08.828 20:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.828 20:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.828 20:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:08.828 20:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.828 20:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:08.828 20:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:08.828 20:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:08.828 20:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:14:08.828 20:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.828 20:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.828 20:10:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:08.828 20:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.828 20:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:08.828 20:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:08.828 20:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:08.828 20:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.828 20:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.828 20:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:08.828 [2024-10-17 20:10:54.415695] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:08.828 20:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.828 20:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 13236ed8-b9e9-40bb-b999-c26dbdfb9eaf '!=' 13236ed8-b9e9-40bb-b999-c26dbdfb9eaf ']' 00:14:08.828 20:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:14:08.828 20:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:08.828 20:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:08.828 20:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:14:08.828 20:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.828 20:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.828 [2024-10-17 20:10:54.463345] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:08.828 
20:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.828 20:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:08.828 20:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:08.828 20:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:08.828 20:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:08.829 20:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:08.829 20:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:08.829 20:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.829 20:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.829 20:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.829 20:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.829 20:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.829 20:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.829 20:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.829 20:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.089 20:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.089 20:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:09.089 "name": "raid_bdev1", 00:14:09.089 "uuid": "13236ed8-b9e9-40bb-b999-c26dbdfb9eaf", 00:14:09.089 "strip_size_kb": 0, 00:14:09.089 "state": 
"online", 00:14:09.089 "raid_level": "raid1", 00:14:09.089 "superblock": true, 00:14:09.089 "num_base_bdevs": 4, 00:14:09.089 "num_base_bdevs_discovered": 3, 00:14:09.089 "num_base_bdevs_operational": 3, 00:14:09.089 "base_bdevs_list": [ 00:14:09.089 { 00:14:09.089 "name": null, 00:14:09.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.089 "is_configured": false, 00:14:09.089 "data_offset": 0, 00:14:09.089 "data_size": 63488 00:14:09.089 }, 00:14:09.089 { 00:14:09.089 "name": "pt2", 00:14:09.089 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:09.089 "is_configured": true, 00:14:09.089 "data_offset": 2048, 00:14:09.089 "data_size": 63488 00:14:09.089 }, 00:14:09.089 { 00:14:09.089 "name": "pt3", 00:14:09.089 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:09.089 "is_configured": true, 00:14:09.089 "data_offset": 2048, 00:14:09.089 "data_size": 63488 00:14:09.089 }, 00:14:09.089 { 00:14:09.089 "name": "pt4", 00:14:09.089 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:09.089 "is_configured": true, 00:14:09.089 "data_offset": 2048, 00:14:09.089 "data_size": 63488 00:14:09.089 } 00:14:09.089 ] 00:14:09.089 }' 00:14:09.089 20:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:09.089 20:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.347 20:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:09.347 20:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.347 20:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.347 [2024-10-17 20:10:54.991429] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:09.347 [2024-10-17 20:10:54.991471] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:09.347 [2024-10-17 20:10:54.991575] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:09.347 [2024-10-17 20:10:54.991683] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:09.347 [2024-10-17 20:10:54.991699] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:09.347 20:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.347 20:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:14:09.347 20:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.605 20:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.605 20:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.605 20:10:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.605 20:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:14:09.606 20:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:14:09.606 20:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:14:09.606 20:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:09.606 20:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:14:09.606 20:10:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.606 20:10:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.606 20:10:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.606 20:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:09.606 20:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < 
num_base_bdevs )) 00:14:09.606 20:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:14:09.606 20:10:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.606 20:10:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.606 20:10:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.606 20:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:09.606 20:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:09.606 20:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:14:09.606 20:10:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.606 20:10:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.606 20:10:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.606 20:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:09.606 20:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:09.606 20:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:14:09.606 20:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:09.606 20:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:09.606 20:10:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.606 20:10:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.606 [2024-10-17 20:10:55.083426] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:09.606 [2024-10-17 
20:10:55.083502] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:09.606 [2024-10-17 20:10:55.083535] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:14:09.606 [2024-10-17 20:10:55.083550] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:09.606 [2024-10-17 20:10:55.086489] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:09.606 [2024-10-17 20:10:55.086653] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:09.606 [2024-10-17 20:10:55.086775] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:09.606 [2024-10-17 20:10:55.086837] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:09.606 pt2 00:14:09.606 20:10:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.606 20:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:14:09.606 20:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:09.606 20:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:09.606 20:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:09.606 20:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:09.606 20:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:09.606 20:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:09.606 20:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:09.606 20:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:09.606 20:10:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:09.606 20:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.606 20:10:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.606 20:10:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.606 20:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.606 20:10:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.606 20:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:09.606 "name": "raid_bdev1", 00:14:09.606 "uuid": "13236ed8-b9e9-40bb-b999-c26dbdfb9eaf", 00:14:09.606 "strip_size_kb": 0, 00:14:09.606 "state": "configuring", 00:14:09.606 "raid_level": "raid1", 00:14:09.606 "superblock": true, 00:14:09.606 "num_base_bdevs": 4, 00:14:09.606 "num_base_bdevs_discovered": 1, 00:14:09.606 "num_base_bdevs_operational": 3, 00:14:09.606 "base_bdevs_list": [ 00:14:09.606 { 00:14:09.606 "name": null, 00:14:09.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.606 "is_configured": false, 00:14:09.606 "data_offset": 2048, 00:14:09.606 "data_size": 63488 00:14:09.606 }, 00:14:09.606 { 00:14:09.606 "name": "pt2", 00:14:09.606 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:09.606 "is_configured": true, 00:14:09.606 "data_offset": 2048, 00:14:09.606 "data_size": 63488 00:14:09.606 }, 00:14:09.606 { 00:14:09.606 "name": null, 00:14:09.606 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:09.606 "is_configured": false, 00:14:09.606 "data_offset": 2048, 00:14:09.606 "data_size": 63488 00:14:09.606 }, 00:14:09.606 { 00:14:09.606 "name": null, 00:14:09.606 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:09.606 "is_configured": false, 00:14:09.606 "data_offset": 2048, 00:14:09.606 "data_size": 63488 00:14:09.606 
} 00:14:09.606 ] 00:14:09.606 }' 00:14:09.606 20:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:09.606 20:10:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.173 20:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:10.173 20:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:10.173 20:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:10.173 20:10:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.173 20:10:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.173 [2024-10-17 20:10:55.623656] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:10.173 [2024-10-17 20:10:55.623745] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:10.173 [2024-10-17 20:10:55.623793] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:14:10.173 [2024-10-17 20:10:55.623821] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:10.173 [2024-10-17 20:10:55.624498] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:10.173 [2024-10-17 20:10:55.624522] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:10.173 [2024-10-17 20:10:55.624633] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:10.173 [2024-10-17 20:10:55.624663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:10.173 pt3 00:14:10.173 20:10:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.173 20:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # 
verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:14:10.173 20:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:10.173 20:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:10.173 20:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:10.173 20:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:10.173 20:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:10.173 20:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.173 20:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.173 20:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.173 20:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.173 20:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.173 20:10:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.173 20:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.173 20:10:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.173 20:10:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.173 20:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.173 "name": "raid_bdev1", 00:14:10.173 "uuid": "13236ed8-b9e9-40bb-b999-c26dbdfb9eaf", 00:14:10.173 "strip_size_kb": 0, 00:14:10.173 "state": "configuring", 00:14:10.173 "raid_level": "raid1", 00:14:10.173 "superblock": true, 00:14:10.173 "num_base_bdevs": 4, 00:14:10.173 "num_base_bdevs_discovered": 2, 
00:14:10.173 "num_base_bdevs_operational": 3, 00:14:10.173 "base_bdevs_list": [ 00:14:10.173 { 00:14:10.173 "name": null, 00:14:10.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.173 "is_configured": false, 00:14:10.173 "data_offset": 2048, 00:14:10.173 "data_size": 63488 00:14:10.173 }, 00:14:10.173 { 00:14:10.173 "name": "pt2", 00:14:10.173 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:10.173 "is_configured": true, 00:14:10.173 "data_offset": 2048, 00:14:10.173 "data_size": 63488 00:14:10.173 }, 00:14:10.173 { 00:14:10.173 "name": "pt3", 00:14:10.173 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:10.173 "is_configured": true, 00:14:10.173 "data_offset": 2048, 00:14:10.173 "data_size": 63488 00:14:10.173 }, 00:14:10.173 { 00:14:10.173 "name": null, 00:14:10.173 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:10.173 "is_configured": false, 00:14:10.173 "data_offset": 2048, 00:14:10.174 "data_size": 63488 00:14:10.174 } 00:14:10.174 ] 00:14:10.174 }' 00:14:10.174 20:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.174 20:10:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.741 20:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:10.741 20:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:10.741 20:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:14:10.741 20:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:10.741 20:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.741 20:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.741 [2024-10-17 20:10:56.167794] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:10.741 [2024-10-17 
20:10:56.167893] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:10.741 [2024-10-17 20:10:56.167929] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:14:10.741 [2024-10-17 20:10:56.167945] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:10.741 [2024-10-17 20:10:56.168573] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:10.741 [2024-10-17 20:10:56.168605] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:10.741 [2024-10-17 20:10:56.168712] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:10.741 [2024-10-17 20:10:56.168776] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:10.741 [2024-10-17 20:10:56.168975] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:10.741 [2024-10-17 20:10:56.168990] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:10.741 [2024-10-17 20:10:56.169366] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:10.741 [2024-10-17 20:10:56.169558] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:10.741 [2024-10-17 20:10:56.169579] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:14:10.741 [2024-10-17 20:10:56.169785] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:10.741 pt4 00:14:10.741 20:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.741 20:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:10.741 20:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:10.741 20:10:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:10.741 20:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:10.741 20:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:10.741 20:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:10.741 20:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.741 20:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.741 20:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.741 20:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.741 20:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.741 20:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.741 20:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.741 20:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.741 20:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.741 20:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.741 "name": "raid_bdev1", 00:14:10.741 "uuid": "13236ed8-b9e9-40bb-b999-c26dbdfb9eaf", 00:14:10.741 "strip_size_kb": 0, 00:14:10.741 "state": "online", 00:14:10.741 "raid_level": "raid1", 00:14:10.741 "superblock": true, 00:14:10.741 "num_base_bdevs": 4, 00:14:10.741 "num_base_bdevs_discovered": 3, 00:14:10.741 "num_base_bdevs_operational": 3, 00:14:10.741 "base_bdevs_list": [ 00:14:10.741 { 00:14:10.741 "name": null, 00:14:10.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.741 
"is_configured": false, 00:14:10.741 "data_offset": 2048, 00:14:10.741 "data_size": 63488 00:14:10.741 }, 00:14:10.741 { 00:14:10.741 "name": "pt2", 00:14:10.741 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:10.741 "is_configured": true, 00:14:10.741 "data_offset": 2048, 00:14:10.741 "data_size": 63488 00:14:10.741 }, 00:14:10.741 { 00:14:10.741 "name": "pt3", 00:14:10.741 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:10.741 "is_configured": true, 00:14:10.741 "data_offset": 2048, 00:14:10.741 "data_size": 63488 00:14:10.741 }, 00:14:10.741 { 00:14:10.741 "name": "pt4", 00:14:10.741 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:10.741 "is_configured": true, 00:14:10.741 "data_offset": 2048, 00:14:10.741 "data_size": 63488 00:14:10.741 } 00:14:10.741 ] 00:14:10.741 }' 00:14:10.741 20:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.741 20:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.322 20:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:11.322 20:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.322 20:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.322 [2024-10-17 20:10:56.687885] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:11.322 [2024-10-17 20:10:56.687917] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:11.322 [2024-10-17 20:10:56.688065] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:11.322 [2024-10-17 20:10:56.688166] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:11.322 [2024-10-17 20:10:56.688187] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 
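The repeated `verify_raid_bdev_state` checks in the trace above fetch `rpc_cmd bdev_raid_get_bdevs all`, filter the JSON with `jq -r '.[] | select(.name == "raid_bdev1")'`, and compare the reported state, RAID level, and operational base-bdev count against expected values. A minimal self-contained sketch of that comparison, using a canned JSON snippet and `grep`/`sed` in place of the live `rpc_cmd`/`jq` pipeline (the `get_field` helper is illustrative, not part of the test suite):

```shell
# Canned output resembling what `rpc_cmd bdev_raid_get_bdevs all` returns
# in the log above (illustrative subset, not a live RPC call).
raid_bdev_info='{ "name": "raid_bdev1", "state": "online",
  "raid_level": "raid1", "num_base_bdevs_operational": 3 }'

# Extract one field value; the real test does this with a jq filter, but
# plain grep/sed keeps the sketch runnable without jq installed.
get_field() {
    printf '%s\n' "$raid_bdev_info" |
        grep -o "\"$1\": *\"\{0,1\}[^,\"} ]*" |
        sed -e 's/.*: *//' -e 's/"//g'
}

state=$(get_field state)
raid_level=$(get_field raid_level)
operational=$(get_field num_base_bdevs_operational)

# Mirror the [[ ... == ... ]] comparisons verify_raid_bdev_state performs
# after tearing a base bdev out of the 4-way raid1 volume.
[ "$state" = "online" ]
[ "$raid_level" = "raid1" ]
[ "$operational" = "3" ]
echo "raid_bdev1: $state $raid_level operational=$operational"
```

This mirrors the flow the log shows after `bdev_passthru_delete pt1`: the raid bdev stays `online` (raid1 tolerates the loss) while `num_base_bdevs_operational` drops from 4 to 3.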
00:14:11.322 20:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.322 20:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.322 20:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:14:11.322 20:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.322 20:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.322 20:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.322 20:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:14:11.322 20:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:14:11.322 20:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:14:11.322 20:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:14:11.322 20:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:14:11.322 20:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.322 20:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.322 20:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.322 20:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:11.322 20:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.322 20:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.322 [2024-10-17 20:10:56.759893] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:11.322 [2024-10-17 20:10:56.759980] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:14:11.322 [2024-10-17 20:10:56.760023] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:14:11.322 [2024-10-17 20:10:56.760064] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:11.322 [2024-10-17 20:10:56.763063] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:11.322 [2024-10-17 20:10:56.763147] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:11.322 [2024-10-17 20:10:56.763250] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:11.322 [2024-10-17 20:10:56.763312] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:11.322 [2024-10-17 20:10:56.763513] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:11.322 [2024-10-17 20:10:56.763535] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:11.322 [2024-10-17 20:10:56.763556] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:14:11.322 [2024-10-17 20:10:56.763633] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:11.322 [2024-10-17 20:10:56.763789] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:11.322 pt1 00:14:11.322 20:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.322 20:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:14:11.322 20:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:14:11.322 20:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:11.322 20:10:56 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:11.322 20:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:11.322 20:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:11.322 20:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:11.322 20:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:11.322 20:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:11.322 20:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:11.322 20:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:11.322 20:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.322 20:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.322 20:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.322 20:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.322 20:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.322 20:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.322 "name": "raid_bdev1", 00:14:11.322 "uuid": "13236ed8-b9e9-40bb-b999-c26dbdfb9eaf", 00:14:11.322 "strip_size_kb": 0, 00:14:11.322 "state": "configuring", 00:14:11.322 "raid_level": "raid1", 00:14:11.322 "superblock": true, 00:14:11.322 "num_base_bdevs": 4, 00:14:11.322 "num_base_bdevs_discovered": 2, 00:14:11.322 "num_base_bdevs_operational": 3, 00:14:11.322 "base_bdevs_list": [ 00:14:11.322 { 00:14:11.322 "name": null, 00:14:11.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.322 "is_configured": false, 00:14:11.322 
"data_offset": 2048, 00:14:11.322 "data_size": 63488 00:14:11.322 }, 00:14:11.322 { 00:14:11.322 "name": "pt2", 00:14:11.322 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:11.322 "is_configured": true, 00:14:11.322 "data_offset": 2048, 00:14:11.322 "data_size": 63488 00:14:11.322 }, 00:14:11.322 { 00:14:11.322 "name": "pt3", 00:14:11.322 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:11.322 "is_configured": true, 00:14:11.322 "data_offset": 2048, 00:14:11.322 "data_size": 63488 00:14:11.322 }, 00:14:11.322 { 00:14:11.322 "name": null, 00:14:11.322 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:11.322 "is_configured": false, 00:14:11.322 "data_offset": 2048, 00:14:11.322 "data_size": 63488 00:14:11.322 } 00:14:11.322 ] 00:14:11.322 }' 00:14:11.322 20:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:11.322 20:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.899 20:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:11.899 20:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:14:11.899 20:10:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.899 20:10:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.899 20:10:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.899 20:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:14:11.899 20:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:11.899 20:10:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.899 20:10:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:14:11.899 [2024-10-17 20:10:57.332179] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:11.899 [2024-10-17 20:10:57.332428] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:11.899 [2024-10-17 20:10:57.332474] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:14:11.899 [2024-10-17 20:10:57.332491] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:11.899 [2024-10-17 20:10:57.333100] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:11.899 [2024-10-17 20:10:57.333124] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:11.899 [2024-10-17 20:10:57.333245] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:11.899 [2024-10-17 20:10:57.333283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:11.899 [2024-10-17 20:10:57.333479] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:14:11.899 [2024-10-17 20:10:57.333494] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:11.899 [2024-10-17 20:10:57.333826] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:11.899 [2024-10-17 20:10:57.334028] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:14:11.899 [2024-10-17 20:10:57.334055] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:14:11.899 [2024-10-17 20:10:57.334258] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:11.899 pt4 00:14:11.899 20:10:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.899 20:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 
00:14:11.899 20:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:11.899 20:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:11.899 20:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:11.899 20:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:11.899 20:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:11.899 20:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:11.899 20:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:11.899 20:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:11.899 20:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:11.899 20:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.899 20:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.899 20:10:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.899 20:10:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.899 20:10:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.899 20:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.899 "name": "raid_bdev1", 00:14:11.899 "uuid": "13236ed8-b9e9-40bb-b999-c26dbdfb9eaf", 00:14:11.899 "strip_size_kb": 0, 00:14:11.899 "state": "online", 00:14:11.899 "raid_level": "raid1", 00:14:11.899 "superblock": true, 00:14:11.899 "num_base_bdevs": 4, 00:14:11.899 "num_base_bdevs_discovered": 3, 00:14:11.899 "num_base_bdevs_operational": 3, 00:14:11.899 
"base_bdevs_list": [ 00:14:11.899 { 00:14:11.899 "name": null, 00:14:11.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.899 "is_configured": false, 00:14:11.899 "data_offset": 2048, 00:14:11.899 "data_size": 63488 00:14:11.899 }, 00:14:11.899 { 00:14:11.899 "name": "pt2", 00:14:11.899 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:11.899 "is_configured": true, 00:14:11.899 "data_offset": 2048, 00:14:11.899 "data_size": 63488 00:14:11.899 }, 00:14:11.899 { 00:14:11.899 "name": "pt3", 00:14:11.899 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:11.899 "is_configured": true, 00:14:11.899 "data_offset": 2048, 00:14:11.899 "data_size": 63488 00:14:11.899 }, 00:14:11.899 { 00:14:11.899 "name": "pt4", 00:14:11.899 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:11.899 "is_configured": true, 00:14:11.899 "data_offset": 2048, 00:14:11.899 "data_size": 63488 00:14:11.899 } 00:14:11.899 ] 00:14:11.899 }' 00:14:11.899 20:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:11.900 20:10:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.466 20:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:12.466 20:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:14:12.466 20:10:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.466 20:10:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.466 20:10:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.466 20:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:14:12.466 20:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:12.466 20:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # 
jq -r '.[] | .uuid' 00:14:12.466 20:10:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.466 20:10:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.466 [2024-10-17 20:10:57.896709] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:12.466 20:10:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.466 20:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 13236ed8-b9e9-40bb-b999-c26dbdfb9eaf '!=' 13236ed8-b9e9-40bb-b999-c26dbdfb9eaf ']' 00:14:12.466 20:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74594 00:14:12.466 20:10:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 74594 ']' 00:14:12.466 20:10:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 74594 00:14:12.466 20:10:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:14:12.466 20:10:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:12.466 20:10:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74594 00:14:12.466 killing process with pid 74594 00:14:12.466 20:10:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:12.466 20:10:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:12.466 20:10:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74594' 00:14:12.466 20:10:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 74594 00:14:12.466 [2024-10-17 20:10:57.961580] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:12.466 20:10:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 74594 00:14:12.466 
[2024-10-17 20:10:57.961691] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:12.466 [2024-10-17 20:10:57.961781] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:12.466 [2024-10-17 20:10:57.961801] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:14:12.723 [2024-10-17 20:10:58.291077] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:13.658 20:10:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:13.658 00:14:13.658 real 0m9.360s 00:14:13.658 user 0m15.455s 00:14:13.658 sys 0m1.344s 00:14:13.658 20:10:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:13.658 20:10:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.658 ************************************ 00:14:13.658 END TEST raid_superblock_test 00:14:13.658 ************************************ 00:14:13.917 20:10:59 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:14:13.917 20:10:59 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:13.917 20:10:59 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:13.917 20:10:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:13.917 ************************************ 00:14:13.917 START TEST raid_read_error_test 00:14:13.917 ************************************ 00:14:13.917 20:10:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 read 00:14:13.917 20:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:14:13.917 20:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:14:13.917 20:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local 
error_io_type=read 00:14:13.917 20:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:13.917 20:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:13.917 20:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:13.917 20:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:13.917 20:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:13.917 20:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:13.917 20:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:13.917 20:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:13.917 20:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:13.917 20:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:13.917 20:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:13.917 20:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:14:13.917 20:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:13.917 20:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:13.917 20:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:13.917 20:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:13.917 20:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:13.917 20:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:13.917 20:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:13.917 
20:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:13.917 20:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:13.917 20:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:14:13.917 20:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:14:13.917 20:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:13.917 20:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.5mRqfRskwg 00:14:13.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:13.917 20:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75087 00:14:13.917 20:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75087 00:14:13.917 20:10:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 75087 ']' 00:14:13.917 20:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:13.917 20:10:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:13.917 20:10:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:13.917 20:10:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:13.917 20:10:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:13.917 20:10:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.917 [2024-10-17 20:10:59.453934] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 
00:14:13.917 [2024-10-17 20:10:59.454149] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75087 ] 00:14:14.176 [2024-10-17 20:10:59.628520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.176 [2024-10-17 20:10:59.758258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.435 [2024-10-17 20:10:59.961228] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:14.435 [2024-10-17 20:10:59.961296] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:15.003 20:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:15.003 20:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:14:15.003 20:11:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:15.003 20:11:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:15.003 20:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.003 20:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.003 BaseBdev1_malloc 00:14:15.003 20:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.003 20:11:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:14:15.003 20:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.003 20:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.003 true 00:14:15.003 20:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:14:15.003 20:11:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:15.003 20:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.003 20:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.003 [2024-10-17 20:11:00.481844] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:15.003 [2024-10-17 20:11:00.481931] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:15.003 [2024-10-17 20:11:00.481967] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:15.003 [2024-10-17 20:11:00.481984] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:15.003 [2024-10-17 20:11:00.485041] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:15.003 [2024-10-17 20:11:00.485261] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:15.003 BaseBdev1 00:14:15.003 20:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.003 20:11:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:15.003 20:11:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:15.003 20:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.003 20:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.003 BaseBdev2_malloc 00:14:15.003 20:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.003 20:11:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:15.003 20:11:00 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.003 20:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.003 true 00:14:15.003 20:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.003 20:11:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:15.003 20:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.003 20:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.003 [2024-10-17 20:11:00.546277] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:15.003 [2024-10-17 20:11:00.546378] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:15.003 [2024-10-17 20:11:00.546402] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:15.003 [2024-10-17 20:11:00.546427] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:15.003 [2024-10-17 20:11:00.549802] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:15.003 [2024-10-17 20:11:00.549866] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:15.003 BaseBdev2 00:14:15.003 20:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.003 20:11:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:15.003 20:11:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:15.003 20:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.003 20:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.003 BaseBdev3_malloc 00:14:15.003 20:11:00 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.003 20:11:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:15.003 20:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.003 20:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.003 true 00:14:15.003 20:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.003 20:11:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:15.003 20:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.003 20:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.003 [2024-10-17 20:11:00.624583] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:15.003 [2024-10-17 20:11:00.624862] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:15.003 [2024-10-17 20:11:00.624897] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:15.003 [2024-10-17 20:11:00.624916] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:15.003 [2024-10-17 20:11:00.627663] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:15.003 [2024-10-17 20:11:00.627707] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:15.003 BaseBdev3 00:14:15.003 20:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.003 20:11:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:15.003 20:11:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:14:15.003 20:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.003 20:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.262 BaseBdev4_malloc 00:14:15.262 20:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.262 20:11:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:14:15.262 20:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.262 20:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.262 true 00:14:15.262 20:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.262 20:11:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:14:15.262 20:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.262 20:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.262 [2024-10-17 20:11:00.688419] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:14:15.262 [2024-10-17 20:11:00.688527] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:15.262 [2024-10-17 20:11:00.688553] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:15.262 [2024-10-17 20:11:00.688571] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:15.262 [2024-10-17 20:11:00.691390] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:15.262 [2024-10-17 20:11:00.691441] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:15.262 BaseBdev4 00:14:15.262 20:11:00 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.262 20:11:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:14:15.262 20:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.262 20:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.262 [2024-10-17 20:11:00.696503] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:15.262 [2024-10-17 20:11:00.699283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:15.262 [2024-10-17 20:11:00.699556] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:15.262 [2024-10-17 20:11:00.699791] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:15.262 [2024-10-17 20:11:00.700248] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:14:15.262 [2024-10-17 20:11:00.700403] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:15.262 [2024-10-17 20:11:00.700749] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:15.262 [2024-10-17 20:11:00.701027] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:14:15.262 [2024-10-17 20:11:00.701042] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:14:15.262 [2024-10-17 20:11:00.701430] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:15.262 20:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.262 20:11:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:15.262 20:11:00 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:15.262 20:11:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:15.262 20:11:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:15.262 20:11:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:15.262 20:11:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:15.262 20:11:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.262 20:11:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.262 20:11:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.262 20:11:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.262 20:11:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.262 20:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.262 20:11:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.262 20:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.262 20:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.262 20:11:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:15.262 "name": "raid_bdev1", 00:14:15.262 "uuid": "9aa54fb0-d091-4305-bc61-210a58b0d8ab", 00:14:15.262 "strip_size_kb": 0, 00:14:15.262 "state": "online", 00:14:15.262 "raid_level": "raid1", 00:14:15.262 "superblock": true, 00:14:15.262 "num_base_bdevs": 4, 00:14:15.262 "num_base_bdevs_discovered": 4, 00:14:15.262 "num_base_bdevs_operational": 4, 00:14:15.262 "base_bdevs_list": [ 00:14:15.262 { 
00:14:15.262 "name": "BaseBdev1", 00:14:15.262 "uuid": "ae60e896-44b2-559b-8b21-c3c3687092b5", 00:14:15.262 "is_configured": true, 00:14:15.262 "data_offset": 2048, 00:14:15.262 "data_size": 63488 00:14:15.262 }, 00:14:15.262 { 00:14:15.262 "name": "BaseBdev2", 00:14:15.262 "uuid": "52fc3265-d21c-5ef3-8f9e-a33c40c340a2", 00:14:15.262 "is_configured": true, 00:14:15.262 "data_offset": 2048, 00:14:15.262 "data_size": 63488 00:14:15.262 }, 00:14:15.262 { 00:14:15.262 "name": "BaseBdev3", 00:14:15.262 "uuid": "5f72698c-aa28-53d4-a4e5-bbd8e5c80181", 00:14:15.262 "is_configured": true, 00:14:15.262 "data_offset": 2048, 00:14:15.262 "data_size": 63488 00:14:15.262 }, 00:14:15.262 { 00:14:15.262 "name": "BaseBdev4", 00:14:15.262 "uuid": "a93aa74e-bb98-5c5b-9544-3720dfbac539", 00:14:15.262 "is_configured": true, 00:14:15.262 "data_offset": 2048, 00:14:15.262 "data_size": 63488 00:14:15.262 } 00:14:15.262 ] 00:14:15.262 }' 00:14:15.262 20:11:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:15.262 20:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.829 20:11:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:15.829 20:11:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:15.829 [2024-10-17 20:11:01.311154] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:16.765 20:11:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:14:16.765 20:11:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.765 20:11:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.765 20:11:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.765 20:11:02 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:16.765 20:11:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:14:16.765 20:11:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:14:16.765 20:11:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:14:16.765 20:11:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:16.765 20:11:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:16.765 20:11:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:16.765 20:11:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:16.765 20:11:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:16.765 20:11:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:16.765 20:11:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.765 20:11:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.765 20:11:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.765 20:11:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.765 20:11:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.765 20:11:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.765 20:11:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.765 20:11:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.765 20:11:02 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.765 20:11:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.765 "name": "raid_bdev1", 00:14:16.765 "uuid": "9aa54fb0-d091-4305-bc61-210a58b0d8ab", 00:14:16.765 "strip_size_kb": 0, 00:14:16.765 "state": "online", 00:14:16.765 "raid_level": "raid1", 00:14:16.765 "superblock": true, 00:14:16.765 "num_base_bdevs": 4, 00:14:16.765 "num_base_bdevs_discovered": 4, 00:14:16.765 "num_base_bdevs_operational": 4, 00:14:16.765 "base_bdevs_list": [ 00:14:16.765 { 00:14:16.765 "name": "BaseBdev1", 00:14:16.765 "uuid": "ae60e896-44b2-559b-8b21-c3c3687092b5", 00:14:16.765 "is_configured": true, 00:14:16.765 "data_offset": 2048, 00:14:16.765 "data_size": 63488 00:14:16.765 }, 00:14:16.765 { 00:14:16.765 "name": "BaseBdev2", 00:14:16.765 "uuid": "52fc3265-d21c-5ef3-8f9e-a33c40c340a2", 00:14:16.765 "is_configured": true, 00:14:16.765 "data_offset": 2048, 00:14:16.765 "data_size": 63488 00:14:16.765 }, 00:14:16.765 { 00:14:16.765 "name": "BaseBdev3", 00:14:16.765 "uuid": "5f72698c-aa28-53d4-a4e5-bbd8e5c80181", 00:14:16.765 "is_configured": true, 00:14:16.765 "data_offset": 2048, 00:14:16.765 "data_size": 63488 00:14:16.765 }, 00:14:16.765 { 00:14:16.765 "name": "BaseBdev4", 00:14:16.765 "uuid": "a93aa74e-bb98-5c5b-9544-3720dfbac539", 00:14:16.765 "is_configured": true, 00:14:16.765 "data_offset": 2048, 00:14:16.765 "data_size": 63488 00:14:16.765 } 00:14:16.765 ] 00:14:16.765 }' 00:14:16.765 20:11:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.765 20:11:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.331 20:11:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:17.331 20:11:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.331 20:11:02 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:17.331 [2024-10-17 20:11:02.747216] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:17.331 [2024-10-17 20:11:02.747254] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:17.331 [2024-10-17 20:11:02.750806] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:17.331 [2024-10-17 20:11:02.751060] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:17.331 [2024-10-17 20:11:02.751342] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:17.331 [2024-10-17 20:11:02.751575] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:14:17.331 { 00:14:17.331 "results": [ 00:14:17.331 { 00:14:17.331 "job": "raid_bdev1", 00:14:17.331 "core_mask": "0x1", 00:14:17.331 "workload": "randrw", 00:14:17.331 "percentage": 50, 00:14:17.331 "status": "finished", 00:14:17.331 "queue_depth": 1, 00:14:17.331 "io_size": 131072, 00:14:17.331 "runtime": 1.433372, 00:14:17.331 "iops": 7799.789587071605, 00:14:17.331 "mibps": 974.9736983839506, 00:14:17.331 "io_failed": 0, 00:14:17.331 "io_timeout": 0, 00:14:17.331 "avg_latency_us": 124.23420003252562, 00:14:17.331 "min_latency_us": 37.93454545454546, 00:14:17.331 "max_latency_us": 2040.5527272727272 00:14:17.331 } 00:14:17.331 ], 00:14:17.331 "core_count": 1 00:14:17.331 } 00:14:17.331 20:11:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.331 20:11:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75087 00:14:17.331 20:11:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 75087 ']' 00:14:17.331 20:11:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 75087 00:14:17.331 20:11:02 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@955 -- # uname 00:14:17.331 20:11:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:17.331 20:11:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75087 00:14:17.331 killing process with pid 75087 00:14:17.331 20:11:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:17.331 20:11:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:17.331 20:11:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75087' 00:14:17.331 20:11:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 75087 00:14:17.331 [2024-10-17 20:11:02.794838] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:17.331 20:11:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 75087 00:14:17.589 [2024-10-17 20:11:03.073936] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:18.524 20:11:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:18.524 20:11:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.5mRqfRskwg 00:14:18.524 20:11:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:18.524 20:11:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:14:18.524 20:11:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:14:18.524 20:11:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:18.524 20:11:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:18.524 20:11:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:14:18.524 00:14:18.524 real 0m4.806s 00:14:18.524 user 0m5.888s 00:14:18.524 sys 0m0.613s 
00:14:18.524 20:11:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:18.524 ************************************ 00:14:18.524 END TEST raid_read_error_test 00:14:18.524 ************************************ 00:14:18.524 20:11:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.783 20:11:04 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:14:18.783 20:11:04 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:18.783 20:11:04 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:18.783 20:11:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:18.783 ************************************ 00:14:18.783 START TEST raid_write_error_test 00:14:18.783 ************************************ 00:14:18.783 20:11:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 write 00:14:18.783 20:11:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:14:18.783 20:11:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:14:18.783 20:11:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:14:18.783 20:11:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:18.783 20:11:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:18.783 20:11:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:18.783 20:11:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:18.783 20:11:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:18.783 20:11:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:18.783 20:11:04 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:18.783 20:11:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:18.783 20:11:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:18.783 20:11:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:18.783 20:11:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:18.783 20:11:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:14:18.783 20:11:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:18.783 20:11:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:18.783 20:11:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:18.783 20:11:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:18.783 20:11:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:18.783 20:11:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:18.783 20:11:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:18.783 20:11:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:18.783 20:11:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:18.783 20:11:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:14:18.783 20:11:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:14:18.783 20:11:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:18.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:18.783 20:11:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.chFVXQErjW 00:14:18.783 20:11:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75237 00:14:18.783 20:11:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75237 00:14:18.783 20:11:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:18.783 20:11:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 75237 ']' 00:14:18.783 20:11:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:18.783 20:11:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:18.783 20:11:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:18.783 20:11:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:18.783 20:11:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.783 [2024-10-17 20:11:04.318041] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 
00:14:18.783 [2024-10-17 20:11:04.318404] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75237 ] 00:14:19.042 [2024-10-17 20:11:04.496376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:19.042 [2024-10-17 20:11:04.631399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:19.300 [2024-10-17 20:11:04.832533] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:19.300 [2024-10-17 20:11:04.832888] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:19.867 20:11:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:19.867 20:11:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:14:19.867 20:11:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:19.867 20:11:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:19.867 20:11:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.867 20:11:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.867 BaseBdev1_malloc 00:14:19.867 20:11:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.867 20:11:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:14:19.867 20:11:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.867 20:11:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.867 true 00:14:19.867 20:11:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:14:19.867 20:11:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:19.867 20:11:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.867 20:11:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.867 [2024-10-17 20:11:05.330355] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:19.867 [2024-10-17 20:11:05.330601] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:19.867 [2024-10-17 20:11:05.330641] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:19.867 [2024-10-17 20:11:05.330661] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:19.867 [2024-10-17 20:11:05.333563] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:19.867 [2024-10-17 20:11:05.333760] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:19.867 BaseBdev1 00:14:19.867 20:11:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.867 20:11:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:19.867 20:11:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:19.867 20:11:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.867 20:11:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.867 BaseBdev2_malloc 00:14:19.867 20:11:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.867 20:11:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:19.867 20:11:05 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.867 20:11:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.867 true 00:14:19.867 20:11:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.867 20:11:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:19.867 20:11:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.867 20:11:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.867 [2024-10-17 20:11:05.400481] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:19.867 [2024-10-17 20:11:05.400554] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:19.867 [2024-10-17 20:11:05.400581] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:19.867 [2024-10-17 20:11:05.400599] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:19.867 [2024-10-17 20:11:05.403543] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:19.867 [2024-10-17 20:11:05.403717] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:19.867 BaseBdev2 00:14:19.867 20:11:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.867 20:11:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:19.867 20:11:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:19.867 20:11:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.867 20:11:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:14:19.867 BaseBdev3_malloc 00:14:19.867 20:11:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.867 20:11:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:19.867 20:11:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.867 20:11:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.867 true 00:14:19.867 20:11:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.867 20:11:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:19.867 20:11:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.867 20:11:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.867 [2024-10-17 20:11:05.481705] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:19.867 [2024-10-17 20:11:05.481782] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:19.867 [2024-10-17 20:11:05.481829] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:19.867 [2024-10-17 20:11:05.481850] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:19.867 [2024-10-17 20:11:05.485069] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:19.867 [2024-10-17 20:11:05.485130] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:19.867 BaseBdev3 00:14:19.867 20:11:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.867 20:11:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:19.867 20:11:05 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:19.867 20:11:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.867 20:11:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.126 BaseBdev4_malloc 00:14:20.126 20:11:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.126 20:11:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:14:20.126 20:11:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.126 20:11:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.126 true 00:14:20.126 20:11:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.126 20:11:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:14:20.126 20:11:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.126 20:11:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.126 [2024-10-17 20:11:05.544190] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:14:20.126 [2024-10-17 20:11:05.544440] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:20.126 [2024-10-17 20:11:05.544475] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:20.126 [2024-10-17 20:11:05.544494] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:20.126 [2024-10-17 20:11:05.547348] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:20.126 [2024-10-17 20:11:05.547426] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:20.126 BaseBdev4 
00:14:20.126 20:11:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.126 20:11:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:14:20.126 20:11:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.126 20:11:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.126 [2024-10-17 20:11:05.552290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:20.126 [2024-10-17 20:11:05.554874] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:20.126 [2024-10-17 20:11:05.554985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:20.126 [2024-10-17 20:11:05.555124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:20.126 [2024-10-17 20:11:05.555466] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:14:20.126 [2024-10-17 20:11:05.555488] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:20.126 [2024-10-17 20:11:05.555782] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:20.126 [2024-10-17 20:11:05.556084] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:14:20.126 [2024-10-17 20:11:05.556101] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:14:20.126 [2024-10-17 20:11:05.556349] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:20.126 20:11:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.126 20:11:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:14:20.126 20:11:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:20.126 20:11:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:20.126 20:11:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:20.126 20:11:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:20.126 20:11:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:20.126 20:11:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.126 20:11:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.126 20:11:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.126 20:11:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.126 20:11:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.126 20:11:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.126 20:11:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.126 20:11:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.127 20:11:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.127 20:11:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.127 "name": "raid_bdev1", 00:14:20.127 "uuid": "2c14910c-e5fa-45c8-9a4b-fb2836b37d6e", 00:14:20.127 "strip_size_kb": 0, 00:14:20.127 "state": "online", 00:14:20.127 "raid_level": "raid1", 00:14:20.127 "superblock": true, 00:14:20.127 "num_base_bdevs": 4, 00:14:20.127 "num_base_bdevs_discovered": 4, 00:14:20.127 
"num_base_bdevs_operational": 4, 00:14:20.127 "base_bdevs_list": [ 00:14:20.127 { 00:14:20.127 "name": "BaseBdev1", 00:14:20.127 "uuid": "d9814161-afee-5a93-82ac-4821d4014bae", 00:14:20.127 "is_configured": true, 00:14:20.127 "data_offset": 2048, 00:14:20.127 "data_size": 63488 00:14:20.127 }, 00:14:20.127 { 00:14:20.127 "name": "BaseBdev2", 00:14:20.127 "uuid": "d695a02b-3b72-59e2-9ecf-78be9f7d2e43", 00:14:20.127 "is_configured": true, 00:14:20.127 "data_offset": 2048, 00:14:20.127 "data_size": 63488 00:14:20.127 }, 00:14:20.127 { 00:14:20.127 "name": "BaseBdev3", 00:14:20.127 "uuid": "8c3e18ef-2463-546b-aaba-5d092df65cb6", 00:14:20.127 "is_configured": true, 00:14:20.127 "data_offset": 2048, 00:14:20.127 "data_size": 63488 00:14:20.127 }, 00:14:20.127 { 00:14:20.127 "name": "BaseBdev4", 00:14:20.127 "uuid": "35ccf7fd-2395-52e1-8093-52d97eedd7ca", 00:14:20.127 "is_configured": true, 00:14:20.127 "data_offset": 2048, 00:14:20.127 "data_size": 63488 00:14:20.127 } 00:14:20.127 ] 00:14:20.127 }' 00:14:20.127 20:11:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.127 20:11:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.694 20:11:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:20.694 20:11:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:20.694 [2024-10-17 20:11:06.181824] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:21.630 20:11:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:14:21.630 20:11:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.630 20:11:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.630 [2024-10-17 20:11:07.062478] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:14:21.630 [2024-10-17 20:11:07.062556] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:21.630 [2024-10-17 20:11:07.062846] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:14:21.630 20:11:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.630 20:11:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:21.630 20:11:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:14:21.630 20:11:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:14:21.630 20:11:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:14:21.630 20:11:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:21.630 20:11:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:21.630 20:11:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:21.630 20:11:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:21.630 20:11:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:21.630 20:11:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:21.630 20:11:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:21.630 20:11:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:21.630 20:11:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:21.630 20:11:07 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:14:21.630 20:11:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.630 20:11:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.630 20:11:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.630 20:11:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.630 20:11:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.630 20:11:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.630 "name": "raid_bdev1", 00:14:21.630 "uuid": "2c14910c-e5fa-45c8-9a4b-fb2836b37d6e", 00:14:21.630 "strip_size_kb": 0, 00:14:21.630 "state": "online", 00:14:21.630 "raid_level": "raid1", 00:14:21.630 "superblock": true, 00:14:21.630 "num_base_bdevs": 4, 00:14:21.630 "num_base_bdevs_discovered": 3, 00:14:21.630 "num_base_bdevs_operational": 3, 00:14:21.630 "base_bdevs_list": [ 00:14:21.630 { 00:14:21.630 "name": null, 00:14:21.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.630 "is_configured": false, 00:14:21.630 "data_offset": 0, 00:14:21.630 "data_size": 63488 00:14:21.630 }, 00:14:21.630 { 00:14:21.630 "name": "BaseBdev2", 00:14:21.630 "uuid": "d695a02b-3b72-59e2-9ecf-78be9f7d2e43", 00:14:21.630 "is_configured": true, 00:14:21.630 "data_offset": 2048, 00:14:21.630 "data_size": 63488 00:14:21.630 }, 00:14:21.630 { 00:14:21.630 "name": "BaseBdev3", 00:14:21.630 "uuid": "8c3e18ef-2463-546b-aaba-5d092df65cb6", 00:14:21.630 "is_configured": true, 00:14:21.630 "data_offset": 2048, 00:14:21.630 "data_size": 63488 00:14:21.630 }, 00:14:21.630 { 00:14:21.630 "name": "BaseBdev4", 00:14:21.630 "uuid": "35ccf7fd-2395-52e1-8093-52d97eedd7ca", 00:14:21.630 "is_configured": true, 00:14:21.630 "data_offset": 2048, 00:14:21.630 "data_size": 63488 00:14:21.630 } 00:14:21.630 ] 
00:14:21.630 }' 00:14:21.630 20:11:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.630 20:11:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.211 20:11:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:22.211 20:11:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.211 20:11:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.211 [2024-10-17 20:11:07.590434] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:22.211 [2024-10-17 20:11:07.590625] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:22.211 [2024-10-17 20:11:07.594203] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:22.211 [2024-10-17 20:11:07.594380] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:22.211 [2024-10-17 20:11:07.594667] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:22.211 { 00:14:22.211 "results": [ 00:14:22.211 { 00:14:22.211 "job": "raid_bdev1", 00:14:22.211 "core_mask": "0x1", 00:14:22.211 "workload": "randrw", 00:14:22.211 "percentage": 50, 00:14:22.211 "status": "finished", 00:14:22.211 "queue_depth": 1, 00:14:22.211 "io_size": 131072, 00:14:22.211 "runtime": 1.406422, 00:14:22.211 "iops": 8341.735268646253, 00:14:22.211 "mibps": 1042.7169085807816, 00:14:22.211 "io_failed": 0, 00:14:22.211 "io_timeout": 0, 00:14:22.211 "avg_latency_us": 115.79173666429037, 00:14:22.211 "min_latency_us": 35.60727272727273, 00:14:22.211 "max_latency_us": 2070.3418181818183 00:14:22.211 } 00:14:22.211 ], 00:14:22.211 "core_count": 1 00:14:22.211 } 00:14:22.211 [2024-10-17 20:11:07.594797] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1,
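The bdevperf results block above is internally consistent: with 128 KiB I/Os, the reported MiB/s equals IOPS times the I/O size divided by 2^20. A quick sanity check on the logged numbers (plain arithmetic, no SPDK code involved):

```python
# bdevperf result fields copied from the log above.
iops = 8341.735268646253
io_size = 131072  # bytes per I/O (128 KiB)
reported_mibps = 1042.7169085807816

# mibps = iops * io_size / 1 MiB; with 128 KiB I/Os this is simply iops / 8.
mibps = iops * io_size / (1024 * 1024)
assert abs(mibps - reported_mibps) < 1e-9
```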
state offline 00:14:22.211 20:11:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.211 20:11:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75237 00:14:22.211 20:11:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 75237 ']' 00:14:22.211 20:11:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 75237 00:14:22.211 20:11:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:14:22.211 20:11:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:22.211 20:11:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75237 00:14:22.211 killing process with pid 75237 00:14:22.211 20:11:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:22.211 20:11:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:22.211 20:11:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75237' 00:14:22.211 20:11:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 75237 00:14:22.211 [2024-10-17 20:11:07.632977] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:22.211 20:11:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 75237 00:14:22.469 [2024-10-17 20:11:07.918511] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:23.402 20:11:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.chFVXQErjW 00:14:23.402 20:11:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:23.402 20:11:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:23.402 20:11:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:14:23.402 20:11:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:14:23.402 ************************************ 00:14:23.402 END TEST raid_write_error_test 00:14:23.402 ************************************ 00:14:23.402 20:11:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:23.402 20:11:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:23.402 20:11:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:14:23.402 00:14:23.402 real 0m4.782s 00:14:23.402 user 0m5.834s 00:14:23.402 sys 0m0.621s 00:14:23.402 20:11:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:23.402 20:11:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.402 20:11:09 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:14:23.402 20:11:09 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:14:23.402 20:11:09 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:14:23.402 20:11:09 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:14:23.402 20:11:09 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:23.402 20:11:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:23.402 ************************************ 00:14:23.402 START TEST raid_rebuild_test 00:14:23.402 ************************************ 00:14:23.402 20:11:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false false true 00:14:23.402 20:11:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:23.402 20:11:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:14:23.402 20:11:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:23.402 
20:11:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:23.402 20:11:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:23.402 20:11:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:23.402 20:11:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:23.402 20:11:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:23.402 20:11:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:23.402 20:11:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:23.402 20:11:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:23.402 20:11:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:23.402 20:11:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:23.402 20:11:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:23.402 20:11:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:23.402 20:11:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:23.402 20:11:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:23.402 20:11:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:23.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:23.402 20:11:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:23.402 20:11:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:23.402 20:11:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:23.402 20:11:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:23.402 20:11:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:23.402 20:11:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75382 00:14:23.402 20:11:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75382 00:14:23.402 20:11:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:23.402 20:11:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 75382 ']' 00:14:23.402 20:11:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:23.402 20:11:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:23.402 20:11:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:23.402 20:11:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:23.403 20:11:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.661 [2024-10-17 20:11:09.147539] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 
00:14:23.661 [2024-10-17 20:11:09.148124] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75382 ] 00:14:23.661 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:23.661 Zero copy mechanism will not be used. 00:14:23.920 [2024-10-17 20:11:09.324397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:23.920 [2024-10-17 20:11:09.453884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:24.179 [2024-10-17 20:11:09.656414] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:24.179 [2024-10-17 20:11:09.656661] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:24.745 20:11:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:24.745 20:11:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:14:24.745 20:11:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:24.745 20:11:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:24.745 20:11:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.745 20:11:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.745 BaseBdev1_malloc 00:14:24.745 20:11:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.745 20:11:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:24.745 20:11:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.745 20:11:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:14:24.745 [2024-10-17 20:11:10.167346] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:24.745 [2024-10-17 20:11:10.167494] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:24.745 [2024-10-17 20:11:10.167527] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:24.745 [2024-10-17 20:11:10.167544] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:24.745 [2024-10-17 20:11:10.170527] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:24.745 [2024-10-17 20:11:10.170592] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:24.745 BaseBdev1 00:14:24.745 20:11:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.745 20:11:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:24.745 20:11:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:24.745 20:11:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.745 20:11:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.745 BaseBdev2_malloc 00:14:24.745 20:11:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.745 20:11:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:24.745 20:11:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.745 20:11:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.745 [2024-10-17 20:11:10.223392] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:24.745 [2024-10-17 20:11:10.223494] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:14:24.745 [2024-10-17 20:11:10.223539] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:24.745 [2024-10-17 20:11:10.223557] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:24.745 [2024-10-17 20:11:10.226360] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:24.745 [2024-10-17 20:11:10.226422] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:24.745 BaseBdev2 00:14:24.745 20:11:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.745 20:11:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:24.745 20:11:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.745 20:11:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.745 spare_malloc 00:14:24.745 20:11:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.745 20:11:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:24.746 20:11:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.746 20:11:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.746 spare_delay 00:14:24.746 20:11:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.746 20:11:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:24.746 20:11:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.746 20:11:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.746 [2024-10-17 20:11:10.299665] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:24.746 [2024-10-17 20:11:10.299738] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:24.746 [2024-10-17 20:11:10.299768] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:24.746 [2024-10-17 20:11:10.299785] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:24.746 [2024-10-17 20:11:10.302691] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:24.746 [2024-10-17 20:11:10.302756] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:24.746 spare 00:14:24.746 20:11:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.746 20:11:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:14:24.746 20:11:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.746 20:11:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.746 [2024-10-17 20:11:10.307718] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:24.746 [2024-10-17 20:11:10.310350] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:24.746 [2024-10-17 20:11:10.310630] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:24.746 [2024-10-17 20:11:10.310764] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:24.746 [2024-10-17 20:11:10.311161] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:24.746 [2024-10-17 20:11:10.311489] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:24.746 [2024-10-17 20:11:10.311616] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid 
bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:24.746 [2024-10-17 20:11:10.312061] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:24.746 20:11:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.746 20:11:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:24.746 20:11:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:24.746 20:11:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:24.746 20:11:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:24.746 20:11:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:24.746 20:11:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:24.746 20:11:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.746 20:11:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.746 20:11:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.746 20:11:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.746 20:11:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.746 20:11:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.746 20:11:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.746 20:11:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.746 20:11:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.746 20:11:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.746 "name": 
"raid_bdev1", 00:14:24.746 "uuid": "7816792e-7f1d-4b1a-a86f-69b7f2eb918d", 00:14:24.746 "strip_size_kb": 0, 00:14:24.746 "state": "online", 00:14:24.746 "raid_level": "raid1", 00:14:24.746 "superblock": false, 00:14:24.746 "num_base_bdevs": 2, 00:14:24.746 "num_base_bdevs_discovered": 2, 00:14:24.746 "num_base_bdevs_operational": 2, 00:14:24.746 "base_bdevs_list": [ 00:14:24.746 { 00:14:24.746 "name": "BaseBdev1", 00:14:24.746 "uuid": "23c346a5-9883-5464-ac78-358f5fabc8af", 00:14:24.746 "is_configured": true, 00:14:24.746 "data_offset": 0, 00:14:24.746 "data_size": 65536 00:14:24.746 }, 00:14:24.746 { 00:14:24.746 "name": "BaseBdev2", 00:14:24.746 "uuid": "0c1a8d02-9be6-5d13-bda4-54d056537a65", 00:14:24.746 "is_configured": true, 00:14:24.746 "data_offset": 0, 00:14:24.746 "data_size": 65536 00:14:24.746 } 00:14:24.746 ] 00:14:24.746 }' 00:14:24.746 20:11:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.746 20:11:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.312 20:11:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:25.312 20:11:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.313 20:11:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.313 20:11:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:25.313 [2024-10-17 20:11:10.840638] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:25.313 20:11:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.313 20:11:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:25.313 20:11:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.313 20:11:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:25.313 20:11:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.313 20:11:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:25.313 20:11:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.313 20:11:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:25.313 20:11:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:25.313 20:11:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:25.313 20:11:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:25.313 20:11:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:25.313 20:11:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:25.313 20:11:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:25.313 20:11:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:25.313 20:11:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:25.313 20:11:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:25.313 20:11:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:25.313 20:11:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:25.313 20:11:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:25.313 20:11:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:25.889 [2024-10-17 20:11:11.240549] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:25.889 /dev/nbd0 00:14:25.889 20:11:11 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:25.889 20:11:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:25.889 20:11:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:25.889 20:11:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:14:25.889 20:11:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:25.889 20:11:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:25.889 20:11:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:25.889 20:11:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:14:25.889 20:11:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:25.889 20:11:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:25.889 20:11:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:25.889 1+0 records in 00:14:25.889 1+0 records out 00:14:25.889 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000347119 s, 11.8 MB/s 00:14:25.889 20:11:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:25.889 20:11:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:14:25.889 20:11:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:25.889 20:11:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:25.889 20:11:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:14:25.889 20:11:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:14:25.889 20:11:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:25.889 20:11:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:25.889 20:11:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:25.889 20:11:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:14:32.451 65536+0 records in 00:14:32.451 65536+0 records out 00:14:32.451 33554432 bytes (34 MB, 32 MiB) copied, 5.93606 s, 5.7 MB/s 00:14:32.451 20:11:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:32.451 20:11:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:32.451 20:11:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:32.451 20:11:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:32.451 20:11:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:32.451 20:11:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:32.451 20:11:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:32.451 20:11:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:32.451 [2024-10-17 20:11:17.586839] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:32.451 20:11:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:32.451 20:11:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:32.451 20:11:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:32.451 20:11:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:32.451 20:11:17 
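The `dd` summary above matches the raid bdev geometry logged earlier (blockcnt 65536, blocklen 512): writing `bs=512 count=65536` covers the whole 32 MiB device. A small check of the logged figures (plain arithmetic on values from the log; `dd` reports decimal MB/s):

```python
# dd parameters and summary values from the log above.
bs, count = 512, 65536
total_bytes = bs * count
assert total_bytes == 33554432            # "33554432 bytes" in the dd summary
assert total_bytes == 32 * 1024 * 1024    # 32 MiB: 65536 blocks * 512 B blocklen

runtime = 5.93606                          # seconds, from the dd summary
throughput_mb_s = total_bytes / runtime / 1e6  # decimal MB/s, as dd prints it
assert round(throughput_mb_s, 1) == 5.7   # matches "5.7 MB/s"
```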
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:32.451 20:11:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:32.451 20:11:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:32.451 20:11:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:32.451 20:11:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.451 20:11:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.451 [2024-10-17 20:11:17.598968] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:32.451 20:11:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.451 20:11:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:32.451 20:11:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:32.451 20:11:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:32.451 20:11:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:32.451 20:11:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:32.451 20:11:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:32.451 20:11:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.451 20:11:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.451 20:11:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.451 20:11:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.451 20:11:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.451 
20:11:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.451 20:11:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.451 20:11:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.451 20:11:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.451 20:11:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.451 "name": "raid_bdev1", 00:14:32.451 "uuid": "7816792e-7f1d-4b1a-a86f-69b7f2eb918d", 00:14:32.451 "strip_size_kb": 0, 00:14:32.451 "state": "online", 00:14:32.451 "raid_level": "raid1", 00:14:32.451 "superblock": false, 00:14:32.451 "num_base_bdevs": 2, 00:14:32.451 "num_base_bdevs_discovered": 1, 00:14:32.451 "num_base_bdevs_operational": 1, 00:14:32.451 "base_bdevs_list": [ 00:14:32.451 { 00:14:32.451 "name": null, 00:14:32.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.451 "is_configured": false, 00:14:32.451 "data_offset": 0, 00:14:32.451 "data_size": 65536 00:14:32.451 }, 00:14:32.451 { 00:14:32.451 "name": "BaseBdev2", 00:14:32.451 "uuid": "0c1a8d02-9be6-5d13-bda4-54d056537a65", 00:14:32.451 "is_configured": true, 00:14:32.451 "data_offset": 0, 00:14:32.451 "data_size": 65536 00:14:32.451 } 00:14:32.451 ] 00:14:32.451 }' 00:14:32.451 20:11:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.451 20:11:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.451 20:11:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:32.451 20:11:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.451 20:11:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.451 [2024-10-17 20:11:18.091178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is 
claimed 00:14:32.710 [2024-10-17 20:11:18.107806] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:14:32.710 20:11:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.710 20:11:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:32.710 [2024-10-17 20:11:18.110499] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:33.644 20:11:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:33.644 20:11:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:33.644 20:11:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:33.644 20:11:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:33.644 20:11:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:33.644 20:11:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.644 20:11:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.644 20:11:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.644 20:11:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.644 20:11:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.644 20:11:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:33.644 "name": "raid_bdev1", 00:14:33.644 "uuid": "7816792e-7f1d-4b1a-a86f-69b7f2eb918d", 00:14:33.644 "strip_size_kb": 0, 00:14:33.644 "state": "online", 00:14:33.644 "raid_level": "raid1", 00:14:33.644 "superblock": false, 00:14:33.644 "num_base_bdevs": 2, 00:14:33.644 "num_base_bdevs_discovered": 2, 00:14:33.644 "num_base_bdevs_operational": 2, 
00:14:33.644 "process": { 00:14:33.644 "type": "rebuild", 00:14:33.644 "target": "spare", 00:14:33.644 "progress": { 00:14:33.644 "blocks": 20480, 00:14:33.644 "percent": 31 00:14:33.644 } 00:14:33.645 }, 00:14:33.645 "base_bdevs_list": [ 00:14:33.645 { 00:14:33.645 "name": "spare", 00:14:33.645 "uuid": "b5a2ef9d-596a-5629-a514-fc757c252892", 00:14:33.645 "is_configured": true, 00:14:33.645 "data_offset": 0, 00:14:33.645 "data_size": 65536 00:14:33.645 }, 00:14:33.645 { 00:14:33.645 "name": "BaseBdev2", 00:14:33.645 "uuid": "0c1a8d02-9be6-5d13-bda4-54d056537a65", 00:14:33.645 "is_configured": true, 00:14:33.645 "data_offset": 0, 00:14:33.645 "data_size": 65536 00:14:33.645 } 00:14:33.645 ] 00:14:33.645 }' 00:14:33.645 20:11:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:33.645 20:11:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:33.645 20:11:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:33.645 20:11:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:33.645 20:11:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:33.645 20:11:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.645 20:11:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.645 [2024-10-17 20:11:19.284316] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:33.903 [2024-10-17 20:11:19.320374] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:33.903 [2024-10-17 20:11:19.320515] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:33.903 [2024-10-17 20:11:19.320538] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:33.903 [2024-10-17 20:11:19.320552] 
bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:33.903 20:11:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.903 20:11:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:33.903 20:11:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:33.903 20:11:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:33.903 20:11:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:33.903 20:11:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:33.903 20:11:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:33.903 20:11:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.903 20:11:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.903 20:11:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.903 20:11:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.903 20:11:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.903 20:11:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.903 20:11:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.903 20:11:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.903 20:11:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.903 20:11:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:33.903 "name": "raid_bdev1", 00:14:33.903 "uuid": 
"7816792e-7f1d-4b1a-a86f-69b7f2eb918d", 00:14:33.903 "strip_size_kb": 0, 00:14:33.903 "state": "online", 00:14:33.903 "raid_level": "raid1", 00:14:33.903 "superblock": false, 00:14:33.903 "num_base_bdevs": 2, 00:14:33.903 "num_base_bdevs_discovered": 1, 00:14:33.903 "num_base_bdevs_operational": 1, 00:14:33.903 "base_bdevs_list": [ 00:14:33.903 { 00:14:33.903 "name": null, 00:14:33.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.903 "is_configured": false, 00:14:33.903 "data_offset": 0, 00:14:33.903 "data_size": 65536 00:14:33.903 }, 00:14:33.903 { 00:14:33.903 "name": "BaseBdev2", 00:14:33.903 "uuid": "0c1a8d02-9be6-5d13-bda4-54d056537a65", 00:14:33.903 "is_configured": true, 00:14:33.903 "data_offset": 0, 00:14:33.903 "data_size": 65536 00:14:33.903 } 00:14:33.903 ] 00:14:33.903 }' 00:14:33.903 20:11:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:33.903 20:11:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.470 20:11:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:34.470 20:11:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:34.470 20:11:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:34.470 20:11:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:34.470 20:11:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:34.470 20:11:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.470 20:11:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.470 20:11:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.470 20:11:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.470 20:11:19 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.470 20:11:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:34.470 "name": "raid_bdev1", 00:14:34.470 "uuid": "7816792e-7f1d-4b1a-a86f-69b7f2eb918d", 00:14:34.470 "strip_size_kb": 0, 00:14:34.470 "state": "online", 00:14:34.470 "raid_level": "raid1", 00:14:34.470 "superblock": false, 00:14:34.470 "num_base_bdevs": 2, 00:14:34.470 "num_base_bdevs_discovered": 1, 00:14:34.470 "num_base_bdevs_operational": 1, 00:14:34.470 "base_bdevs_list": [ 00:14:34.470 { 00:14:34.470 "name": null, 00:14:34.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.470 "is_configured": false, 00:14:34.470 "data_offset": 0, 00:14:34.470 "data_size": 65536 00:14:34.470 }, 00:14:34.470 { 00:14:34.470 "name": "BaseBdev2", 00:14:34.470 "uuid": "0c1a8d02-9be6-5d13-bda4-54d056537a65", 00:14:34.470 "is_configured": true, 00:14:34.470 "data_offset": 0, 00:14:34.470 "data_size": 65536 00:14:34.470 } 00:14:34.470 ] 00:14:34.470 }' 00:14:34.470 20:11:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:34.470 20:11:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:34.470 20:11:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:34.470 20:11:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:34.470 20:11:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:34.470 20:11:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.470 20:11:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.470 [2024-10-17 20:11:20.065775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:34.470 [2024-10-17 20:11:20.081768] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:14:34.470 20:11:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.470 20:11:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:34.470 [2024-10-17 20:11:20.084274] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:35.850 20:11:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:35.850 20:11:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:35.850 20:11:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:35.850 20:11:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:35.850 20:11:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:35.850 20:11:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.850 20:11:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.850 20:11:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.850 20:11:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.850 20:11:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.850 20:11:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:35.850 "name": "raid_bdev1", 00:14:35.850 "uuid": "7816792e-7f1d-4b1a-a86f-69b7f2eb918d", 00:14:35.850 "strip_size_kb": 0, 00:14:35.850 "state": "online", 00:14:35.850 "raid_level": "raid1", 00:14:35.850 "superblock": false, 00:14:35.850 "num_base_bdevs": 2, 00:14:35.850 "num_base_bdevs_discovered": 2, 00:14:35.850 "num_base_bdevs_operational": 2, 00:14:35.850 "process": { 00:14:35.850 "type": "rebuild", 00:14:35.850 "target": 
"spare", 00:14:35.850 "progress": { 00:14:35.850 "blocks": 20480, 00:14:35.850 "percent": 31 00:14:35.850 } 00:14:35.850 }, 00:14:35.850 "base_bdevs_list": [ 00:14:35.850 { 00:14:35.850 "name": "spare", 00:14:35.850 "uuid": "b5a2ef9d-596a-5629-a514-fc757c252892", 00:14:35.850 "is_configured": true, 00:14:35.850 "data_offset": 0, 00:14:35.850 "data_size": 65536 00:14:35.850 }, 00:14:35.850 { 00:14:35.850 "name": "BaseBdev2", 00:14:35.850 "uuid": "0c1a8d02-9be6-5d13-bda4-54d056537a65", 00:14:35.850 "is_configured": true, 00:14:35.850 "data_offset": 0, 00:14:35.850 "data_size": 65536 00:14:35.850 } 00:14:35.850 ] 00:14:35.850 }' 00:14:35.850 20:11:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:35.850 20:11:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:35.850 20:11:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:35.850 20:11:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:35.850 20:11:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:35.850 20:11:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:14:35.850 20:11:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:35.850 20:11:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:14:35.850 20:11:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=396 00:14:35.850 20:11:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:35.850 20:11:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:35.850 20:11:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:35.850 20:11:21 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:35.850 20:11:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:35.850 20:11:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:35.850 20:11:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.851 20:11:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.851 20:11:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.851 20:11:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.851 20:11:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.851 20:11:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:35.851 "name": "raid_bdev1", 00:14:35.851 "uuid": "7816792e-7f1d-4b1a-a86f-69b7f2eb918d", 00:14:35.851 "strip_size_kb": 0, 00:14:35.851 "state": "online", 00:14:35.851 "raid_level": "raid1", 00:14:35.851 "superblock": false, 00:14:35.851 "num_base_bdevs": 2, 00:14:35.851 "num_base_bdevs_discovered": 2, 00:14:35.851 "num_base_bdevs_operational": 2, 00:14:35.851 "process": { 00:14:35.851 "type": "rebuild", 00:14:35.851 "target": "spare", 00:14:35.851 "progress": { 00:14:35.851 "blocks": 22528, 00:14:35.851 "percent": 34 00:14:35.851 } 00:14:35.851 }, 00:14:35.851 "base_bdevs_list": [ 00:14:35.851 { 00:14:35.851 "name": "spare", 00:14:35.851 "uuid": "b5a2ef9d-596a-5629-a514-fc757c252892", 00:14:35.851 "is_configured": true, 00:14:35.851 "data_offset": 0, 00:14:35.851 "data_size": 65536 00:14:35.851 }, 00:14:35.851 { 00:14:35.851 "name": "BaseBdev2", 00:14:35.851 "uuid": "0c1a8d02-9be6-5d13-bda4-54d056537a65", 00:14:35.851 "is_configured": true, 00:14:35.851 "data_offset": 0, 00:14:35.851 "data_size": 65536 00:14:35.851 } 00:14:35.851 ] 00:14:35.851 }' 00:14:35.851 20:11:21 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:35.851 20:11:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:35.851 20:11:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:35.851 20:11:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:35.851 20:11:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:36.786 20:11:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:36.786 20:11:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:36.786 20:11:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:36.786 20:11:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:36.786 20:11:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:36.786 20:11:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:36.786 20:11:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.786 20:11:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.786 20:11:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.786 20:11:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.786 20:11:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.044 20:11:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:37.044 "name": "raid_bdev1", 00:14:37.044 "uuid": "7816792e-7f1d-4b1a-a86f-69b7f2eb918d", 00:14:37.045 "strip_size_kb": 0, 00:14:37.045 "state": "online", 00:14:37.045 "raid_level": "raid1", 
00:14:37.045 "superblock": false, 00:14:37.045 "num_base_bdevs": 2, 00:14:37.045 "num_base_bdevs_discovered": 2, 00:14:37.045 "num_base_bdevs_operational": 2, 00:14:37.045 "process": { 00:14:37.045 "type": "rebuild", 00:14:37.045 "target": "spare", 00:14:37.045 "progress": { 00:14:37.045 "blocks": 47104, 00:14:37.045 "percent": 71 00:14:37.045 } 00:14:37.045 }, 00:14:37.045 "base_bdevs_list": [ 00:14:37.045 { 00:14:37.045 "name": "spare", 00:14:37.045 "uuid": "b5a2ef9d-596a-5629-a514-fc757c252892", 00:14:37.045 "is_configured": true, 00:14:37.045 "data_offset": 0, 00:14:37.045 "data_size": 65536 00:14:37.045 }, 00:14:37.045 { 00:14:37.045 "name": "BaseBdev2", 00:14:37.045 "uuid": "0c1a8d02-9be6-5d13-bda4-54d056537a65", 00:14:37.045 "is_configured": true, 00:14:37.045 "data_offset": 0, 00:14:37.045 "data_size": 65536 00:14:37.045 } 00:14:37.045 ] 00:14:37.045 }' 00:14:37.045 20:11:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:37.045 20:11:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:37.045 20:11:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:37.045 20:11:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:37.045 20:11:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:37.983 [2024-10-17 20:11:23.308657] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:37.983 [2024-10-17 20:11:23.308786] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:37.983 [2024-10-17 20:11:23.308865] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:37.983 20:11:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:37.983 20:11:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:14:37.983 20:11:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:37.983 20:11:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:37.983 20:11:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:37.983 20:11:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:37.983 20:11:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.983 20:11:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.983 20:11:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.983 20:11:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.983 20:11:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.983 20:11:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:37.983 "name": "raid_bdev1", 00:14:37.983 "uuid": "7816792e-7f1d-4b1a-a86f-69b7f2eb918d", 00:14:37.983 "strip_size_kb": 0, 00:14:37.983 "state": "online", 00:14:37.983 "raid_level": "raid1", 00:14:37.983 "superblock": false, 00:14:37.983 "num_base_bdevs": 2, 00:14:37.983 "num_base_bdevs_discovered": 2, 00:14:37.983 "num_base_bdevs_operational": 2, 00:14:37.983 "base_bdevs_list": [ 00:14:37.983 { 00:14:37.983 "name": "spare", 00:14:37.983 "uuid": "b5a2ef9d-596a-5629-a514-fc757c252892", 00:14:37.983 "is_configured": true, 00:14:37.983 "data_offset": 0, 00:14:37.983 "data_size": 65536 00:14:37.983 }, 00:14:37.983 { 00:14:37.983 "name": "BaseBdev2", 00:14:37.983 "uuid": "0c1a8d02-9be6-5d13-bda4-54d056537a65", 00:14:37.983 "is_configured": true, 00:14:37.983 "data_offset": 0, 00:14:37.983 "data_size": 65536 00:14:37.983 } 00:14:37.983 ] 00:14:37.983 }' 00:14:37.983 20:11:23 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:38.242 20:11:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:38.242 20:11:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:38.242 20:11:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:38.242 20:11:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:38.242 20:11:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:38.242 20:11:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:38.242 20:11:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:38.242 20:11:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:38.242 20:11:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:38.242 20:11:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.242 20:11:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.242 20:11:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.242 20:11:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.242 20:11:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.242 20:11:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:38.242 "name": "raid_bdev1", 00:14:38.242 "uuid": "7816792e-7f1d-4b1a-a86f-69b7f2eb918d", 00:14:38.242 "strip_size_kb": 0, 00:14:38.242 "state": "online", 00:14:38.242 "raid_level": "raid1", 00:14:38.242 "superblock": false, 00:14:38.242 "num_base_bdevs": 2, 00:14:38.242 "num_base_bdevs_discovered": 2, 00:14:38.242 "num_base_bdevs_operational": 
2, 00:14:38.242 "base_bdevs_list": [ 00:14:38.242 { 00:14:38.242 "name": "spare", 00:14:38.242 "uuid": "b5a2ef9d-596a-5629-a514-fc757c252892", 00:14:38.242 "is_configured": true, 00:14:38.242 "data_offset": 0, 00:14:38.242 "data_size": 65536 00:14:38.242 }, 00:14:38.242 { 00:14:38.242 "name": "BaseBdev2", 00:14:38.242 "uuid": "0c1a8d02-9be6-5d13-bda4-54d056537a65", 00:14:38.242 "is_configured": true, 00:14:38.242 "data_offset": 0, 00:14:38.242 "data_size": 65536 00:14:38.242 } 00:14:38.242 ] 00:14:38.242 }' 00:14:38.242 20:11:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:38.242 20:11:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:38.242 20:11:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:38.242 20:11:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:38.242 20:11:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:38.242 20:11:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:38.242 20:11:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:38.242 20:11:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:38.242 20:11:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:38.242 20:11:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:38.242 20:11:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.242 20:11:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.242 20:11:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.242 20:11:23 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.501 20:11:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.501 20:11:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.501 20:11:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.501 20:11:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.501 20:11:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.501 20:11:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.501 "name": "raid_bdev1", 00:14:38.501 "uuid": "7816792e-7f1d-4b1a-a86f-69b7f2eb918d", 00:14:38.501 "strip_size_kb": 0, 00:14:38.501 "state": "online", 00:14:38.501 "raid_level": "raid1", 00:14:38.501 "superblock": false, 00:14:38.501 "num_base_bdevs": 2, 00:14:38.501 "num_base_bdevs_discovered": 2, 00:14:38.501 "num_base_bdevs_operational": 2, 00:14:38.501 "base_bdevs_list": [ 00:14:38.501 { 00:14:38.501 "name": "spare", 00:14:38.501 "uuid": "b5a2ef9d-596a-5629-a514-fc757c252892", 00:14:38.501 "is_configured": true, 00:14:38.501 "data_offset": 0, 00:14:38.501 "data_size": 65536 00:14:38.501 }, 00:14:38.501 { 00:14:38.501 "name": "BaseBdev2", 00:14:38.501 "uuid": "0c1a8d02-9be6-5d13-bda4-54d056537a65", 00:14:38.501 "is_configured": true, 00:14:38.501 "data_offset": 0, 00:14:38.501 "data_size": 65536 00:14:38.501 } 00:14:38.501 ] 00:14:38.501 }' 00:14:38.501 20:11:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.501 20:11:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.067 20:11:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:39.067 20:11:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.067 20:11:24 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:39.067 [2024-10-17 20:11:24.422202] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:39.067 [2024-10-17 20:11:24.422412] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:39.067 [2024-10-17 20:11:24.422531] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:39.067 [2024-10-17 20:11:24.422637] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:39.067 [2024-10-17 20:11:24.422654] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:39.067 20:11:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.067 20:11:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.067 20:11:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:39.067 20:11:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.067 20:11:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.067 20:11:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.067 20:11:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:39.067 20:11:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:39.067 20:11:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:39.067 20:11:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:39.067 20:11:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:39.067 20:11:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 
00:14:39.067 20:11:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:39.067 20:11:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:39.067 20:11:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:39.067 20:11:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:39.067 20:11:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:39.067 20:11:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:39.067 20:11:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:39.326 /dev/nbd0 00:14:39.326 20:11:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:39.326 20:11:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:39.326 20:11:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:39.326 20:11:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:14:39.326 20:11:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:39.326 20:11:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:39.326 20:11:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:39.326 20:11:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:14:39.326 20:11:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:39.326 20:11:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:39.326 20:11:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:39.326 1+0 
records in 00:14:39.326 1+0 records out 00:14:39.326 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000599322 s, 6.8 MB/s 00:14:39.326 20:11:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:39.326 20:11:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:14:39.326 20:11:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:39.326 20:11:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:39.326 20:11:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:14:39.326 20:11:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:39.326 20:11:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:39.326 20:11:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:39.585 /dev/nbd1 00:14:39.585 20:11:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:39.585 20:11:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:39.585 20:11:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:39.585 20:11:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:14:39.585 20:11:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:39.585 20:11:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:39.585 20:11:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:39.585 20:11:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:14:39.585 20:11:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 
)) 00:14:39.585 20:11:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:39.585 20:11:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:39.585 1+0 records in 00:14:39.585 1+0 records out 00:14:39.585 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000364137 s, 11.2 MB/s 00:14:39.585 20:11:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:39.585 20:11:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:14:39.585 20:11:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:39.585 20:11:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:39.585 20:11:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:14:39.585 20:11:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:39.585 20:11:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:39.585 20:11:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:39.843 20:11:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:39.843 20:11:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:39.843 20:11:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:39.843 20:11:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:39.843 20:11:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:39.843 20:11:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:39.843 20:11:25 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:40.102 20:11:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:40.102 20:11:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:40.102 20:11:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:40.102 20:11:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:40.102 20:11:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:40.102 20:11:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:40.102 20:11:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:40.102 20:11:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:40.102 20:11:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:40.102 20:11:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:40.360 20:11:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:40.360 20:11:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:40.360 20:11:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:40.360 20:11:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:40.360 20:11:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:40.360 20:11:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:40.360 20:11:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:40.360 20:11:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:40.360 20:11:25 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:40.360 20:11:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75382 00:14:40.360 20:11:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 75382 ']' 00:14:40.360 20:11:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 75382 00:14:40.360 20:11:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:14:40.360 20:11:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:40.360 20:11:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75382 00:14:40.360 killing process with pid 75382 00:14:40.360 20:11:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:40.360 20:11:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:40.360 20:11:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75382' 00:14:40.360 20:11:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 75382 00:14:40.360 Received shutdown signal, test time was about 60.000000 seconds 00:14:40.360 00:14:40.360 Latency(us) 00:14:40.360 [2024-10-17T20:11:26.014Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:40.360 [2024-10-17T20:11:26.014Z] =================================================================================================================== 00:14:40.360 [2024-10-17T20:11:26.014Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:40.360 20:11:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 75382 00:14:40.360 [2024-10-17 20:11:25.942924] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:40.619 [2024-10-17 20:11:26.190947] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 
00:14:41.554 20:11:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:14:41.554 00:14:41.554 real 0m18.172s 00:14:41.554 user 0m21.028s 00:14:41.554 sys 0m3.344s 00:14:41.554 20:11:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:41.554 ************************************ 00:14:41.554 END TEST raid_rebuild_test 00:14:41.554 ************************************ 00:14:41.554 20:11:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.814 20:11:27 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:14:41.815 20:11:27 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:14:41.815 20:11:27 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:41.815 20:11:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:41.815 ************************************ 00:14:41.815 START TEST raid_rebuild_test_sb 00:14:41.815 ************************************ 00:14:41.815 20:11:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:14:41.815 20:11:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:41.815 20:11:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:14:41.815 20:11:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:41.815 20:11:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:41.815 20:11:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:41.815 20:11:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:41.815 20:11:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:41.815 20:11:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # 
echo BaseBdev1 00:14:41.815 20:11:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:41.815 20:11:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:41.815 20:11:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:41.815 20:11:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:41.815 20:11:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:41.815 20:11:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:41.815 20:11:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:41.815 20:11:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:41.815 20:11:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:41.815 20:11:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:41.815 20:11:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:41.815 20:11:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:41.815 20:11:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:41.815 20:11:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:41.815 20:11:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:41.815 20:11:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:41.815 20:11:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75823 00:14:41.815 20:11:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75823 00:14:41.815 20:11:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 75823 ']' 00:14:41.815 20:11:27 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:41.815 20:11:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:41.815 20:11:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:41.815 20:11:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:41.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:41.815 20:11:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:41.815 20:11:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.815 [2024-10-17 20:11:27.377761] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 00:14:41.815 [2024-10-17 20:11:27.378260] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75823 ] 00:14:41.815 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:41.815 Zero copy mechanism will not be used. 
00:14:42.073 [2024-10-17 20:11:27.554633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.073 [2024-10-17 20:11:27.676181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:42.331 [2024-10-17 20:11:27.857311] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:42.331 [2024-10-17 20:11:27.857382] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:42.899 20:11:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:42.899 20:11:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:14:42.899 20:11:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:42.899 20:11:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:42.899 20:11:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.899 20:11:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.899 BaseBdev1_malloc 00:14:42.899 20:11:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.899 20:11:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:42.899 20:11:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.899 20:11:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.899 [2024-10-17 20:11:28.366749] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:42.899 [2024-10-17 20:11:28.367063] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:42.899 [2024-10-17 20:11:28.367242] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:42.899 [2024-10-17 
20:11:28.367403] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:42.899 [2024-10-17 20:11:28.370141] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:42.899 [2024-10-17 20:11:28.370371] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:42.899 BaseBdev1 00:14:42.899 20:11:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.899 20:11:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:42.899 20:11:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:42.899 20:11:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.899 20:11:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.899 BaseBdev2_malloc 00:14:42.899 20:11:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.899 20:11:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:42.899 20:11:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.899 20:11:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.899 [2024-10-17 20:11:28.419479] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:42.899 [2024-10-17 20:11:28.419547] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:42.899 [2024-10-17 20:11:28.419573] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:42.899 [2024-10-17 20:11:28.419588] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:42.899 [2024-10-17 20:11:28.422386] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:14:42.899 [2024-10-17 20:11:28.422634] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:42.899 BaseBdev2 00:14:42.899 20:11:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.899 20:11:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:42.899 20:11:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.899 20:11:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.899 spare_malloc 00:14:42.899 20:11:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.899 20:11:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:42.899 20:11:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.899 20:11:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.899 spare_delay 00:14:42.899 20:11:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.899 20:11:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:42.899 20:11:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.899 20:11:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.899 [2024-10-17 20:11:28.498718] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:42.899 [2024-10-17 20:11:28.498947] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:42.899 [2024-10-17 20:11:28.498987] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:42.899 [2024-10-17 20:11:28.499030] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:42.899 [2024-10-17 20:11:28.502015] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:42.899 [2024-10-17 20:11:28.502065] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:42.899 spare 00:14:42.899 20:11:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.899 20:11:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:14:42.899 20:11:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.899 20:11:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.899 [2024-10-17 20:11:28.510881] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:42.899 [2024-10-17 20:11:28.513580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:42.899 [2024-10-17 20:11:28.513961] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:42.899 [2024-10-17 20:11:28.514120] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:42.899 [2024-10-17 20:11:28.514500] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:42.899 [2024-10-17 20:11:28.514834] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:42.899 [2024-10-17 20:11:28.514955] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:42.899 [2024-10-17 20:11:28.515349] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:42.899 20:11:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.899 20:11:28 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:42.899 20:11:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:42.899 20:11:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:42.899 20:11:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:42.899 20:11:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:42.899 20:11:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:42.899 20:11:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.899 20:11:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.899 20:11:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.899 20:11:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.899 20:11:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.899 20:11:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.899 20:11:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.899 20:11:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.899 20:11:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.158 20:11:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.158 "name": "raid_bdev1", 00:14:43.158 "uuid": "6320130d-478d-42b4-a192-72338b968f1f", 00:14:43.158 "strip_size_kb": 0, 00:14:43.158 "state": "online", 00:14:43.158 "raid_level": "raid1", 00:14:43.158 "superblock": true, 00:14:43.158 "num_base_bdevs": 2, 00:14:43.158 
"num_base_bdevs_discovered": 2, 00:14:43.158 "num_base_bdevs_operational": 2, 00:14:43.158 "base_bdevs_list": [ 00:14:43.158 { 00:14:43.158 "name": "BaseBdev1", 00:14:43.158 "uuid": "80da7b06-692d-5257-b83d-0ad2fb22d0cb", 00:14:43.158 "is_configured": true, 00:14:43.158 "data_offset": 2048, 00:14:43.158 "data_size": 63488 00:14:43.158 }, 00:14:43.158 { 00:14:43.158 "name": "BaseBdev2", 00:14:43.158 "uuid": "36be9fe0-d9f7-51d6-8434-abcbd291f458", 00:14:43.158 "is_configured": true, 00:14:43.158 "data_offset": 2048, 00:14:43.158 "data_size": 63488 00:14:43.158 } 00:14:43.158 ] 00:14:43.158 }' 00:14:43.158 20:11:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.158 20:11:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.416 20:11:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:43.416 20:11:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.416 20:11:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.416 20:11:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:43.416 [2024-10-17 20:11:29.043815] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:43.416 20:11:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.676 20:11:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:43.676 20:11:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.676 20:11:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.676 20:11:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.676 20:11:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 
00:14:43.676 20:11:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.676 20:11:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:43.676 20:11:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:43.676 20:11:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:43.676 20:11:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:43.676 20:11:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:43.676 20:11:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:43.676 20:11:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:43.676 20:11:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:43.676 20:11:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:43.676 20:11:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:43.676 20:11:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:43.676 20:11:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:43.676 20:11:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:43.676 20:11:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:43.944 [2024-10-17 20:11:29.407582] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:43.944 /dev/nbd0 00:14:43.944 20:11:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:43.944 20:11:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
00:14:43.944 20:11:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:43.944 20:11:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:14:43.944 20:11:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:43.944 20:11:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:43.944 20:11:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:43.944 20:11:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:14:43.944 20:11:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:43.945 20:11:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:43.945 20:11:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:43.945 1+0 records in 00:14:43.945 1+0 records out 00:14:43.945 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000383477 s, 10.7 MB/s 00:14:43.945 20:11:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:43.945 20:11:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:14:43.945 20:11:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:43.945 20:11:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:43.945 20:11:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:14:43.945 20:11:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:43.945 20:11:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:43.945 20:11:29 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:43.945 20:11:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:43.945 20:11:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:14:50.503 63488+0 records in 00:14:50.503 63488+0 records out 00:14:50.503 32505856 bytes (33 MB, 31 MiB) copied, 6.12634 s, 5.3 MB/s 00:14:50.503 20:11:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:50.503 20:11:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:50.503 20:11:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:50.503 20:11:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:50.503 20:11:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:50.503 20:11:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:50.503 20:11:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:50.503 [2024-10-17 20:11:35.879762] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:50.503 20:11:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:50.503 20:11:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:50.503 20:11:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:50.503 20:11:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:50.503 20:11:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:50.503 20:11:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd0 /proc/partitions 00:14:50.503 20:11:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:50.504 20:11:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:50.504 20:11:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:50.504 20:11:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.504 20:11:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.504 [2024-10-17 20:11:35.911782] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:50.504 20:11:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.504 20:11:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:50.504 20:11:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:50.504 20:11:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:50.504 20:11:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:50.504 20:11:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:50.504 20:11:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:50.504 20:11:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.504 20:11:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.504 20:11:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.504 20:11:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.504 20:11:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.504 20:11:35 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.504 20:11:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.504 20:11:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.504 20:11:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.504 20:11:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.504 "name": "raid_bdev1", 00:14:50.504 "uuid": "6320130d-478d-42b4-a192-72338b968f1f", 00:14:50.504 "strip_size_kb": 0, 00:14:50.504 "state": "online", 00:14:50.504 "raid_level": "raid1", 00:14:50.504 "superblock": true, 00:14:50.504 "num_base_bdevs": 2, 00:14:50.504 "num_base_bdevs_discovered": 1, 00:14:50.504 "num_base_bdevs_operational": 1, 00:14:50.504 "base_bdevs_list": [ 00:14:50.504 { 00:14:50.504 "name": null, 00:14:50.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.504 "is_configured": false, 00:14:50.504 "data_offset": 0, 00:14:50.504 "data_size": 63488 00:14:50.504 }, 00:14:50.504 { 00:14:50.504 "name": "BaseBdev2", 00:14:50.504 "uuid": "36be9fe0-d9f7-51d6-8434-abcbd291f458", 00:14:50.504 "is_configured": true, 00:14:50.504 "data_offset": 2048, 00:14:50.504 "data_size": 63488 00:14:50.504 } 00:14:50.504 ] 00:14:50.504 }' 00:14:50.504 20:11:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.504 20:11:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.068 20:11:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:51.068 20:11:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.068 20:11:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.068 [2024-10-17 20:11:36.443987] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev spare is claimed 00:14:51.068 [2024-10-17 20:11:36.461389] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:14:51.068 20:11:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.068 20:11:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:51.068 [2024-10-17 20:11:36.463916] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:52.002 20:11:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:52.002 20:11:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:52.002 20:11:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:52.002 20:11:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:52.002 20:11:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:52.002 20:11:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.002 20:11:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.002 20:11:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.002 20:11:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.002 20:11:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.002 20:11:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:52.002 "name": "raid_bdev1", 00:14:52.002 "uuid": "6320130d-478d-42b4-a192-72338b968f1f", 00:14:52.002 "strip_size_kb": 0, 00:14:52.002 "state": "online", 00:14:52.002 "raid_level": "raid1", 00:14:52.002 "superblock": true, 00:14:52.002 "num_base_bdevs": 2, 00:14:52.002 
"num_base_bdevs_discovered": 2, 00:14:52.002 "num_base_bdevs_operational": 2, 00:14:52.002 "process": { 00:14:52.002 "type": "rebuild", 00:14:52.002 "target": "spare", 00:14:52.002 "progress": { 00:14:52.002 "blocks": 20480, 00:14:52.002 "percent": 32 00:14:52.002 } 00:14:52.002 }, 00:14:52.002 "base_bdevs_list": [ 00:14:52.002 { 00:14:52.002 "name": "spare", 00:14:52.002 "uuid": "b0bd0073-404a-5bfe-8e75-6f800a8a399f", 00:14:52.002 "is_configured": true, 00:14:52.002 "data_offset": 2048, 00:14:52.002 "data_size": 63488 00:14:52.002 }, 00:14:52.002 { 00:14:52.002 "name": "BaseBdev2", 00:14:52.002 "uuid": "36be9fe0-d9f7-51d6-8434-abcbd291f458", 00:14:52.002 "is_configured": true, 00:14:52.002 "data_offset": 2048, 00:14:52.002 "data_size": 63488 00:14:52.002 } 00:14:52.002 ] 00:14:52.002 }' 00:14:52.002 20:11:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:52.002 20:11:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:52.002 20:11:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:52.002 20:11:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:52.002 20:11:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:52.002 20:11:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.002 20:11:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.002 [2024-10-17 20:11:37.613869] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:52.262 [2024-10-17 20:11:37.672622] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:52.262 [2024-10-17 20:11:37.672694] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:52.262 [2024-10-17 20:11:37.672716] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:52.262 [2024-10-17 20:11:37.672733] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:52.262 20:11:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.262 20:11:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:52.262 20:11:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:52.262 20:11:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:52.262 20:11:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:52.262 20:11:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:52.262 20:11:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:52.262 20:11:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.262 20:11:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.262 20:11:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.262 20:11:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.262 20:11:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.262 20:11:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.262 20:11:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.262 20:11:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.262 20:11:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.262 20:11:37 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.262 "name": "raid_bdev1", 00:14:52.262 "uuid": "6320130d-478d-42b4-a192-72338b968f1f", 00:14:52.262 "strip_size_kb": 0, 00:14:52.262 "state": "online", 00:14:52.262 "raid_level": "raid1", 00:14:52.262 "superblock": true, 00:14:52.262 "num_base_bdevs": 2, 00:14:52.262 "num_base_bdevs_discovered": 1, 00:14:52.262 "num_base_bdevs_operational": 1, 00:14:52.262 "base_bdevs_list": [ 00:14:52.262 { 00:14:52.262 "name": null, 00:14:52.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.262 "is_configured": false, 00:14:52.262 "data_offset": 0, 00:14:52.262 "data_size": 63488 00:14:52.262 }, 00:14:52.262 { 00:14:52.262 "name": "BaseBdev2", 00:14:52.262 "uuid": "36be9fe0-d9f7-51d6-8434-abcbd291f458", 00:14:52.262 "is_configured": true, 00:14:52.262 "data_offset": 2048, 00:14:52.262 "data_size": 63488 00:14:52.262 } 00:14:52.262 ] 00:14:52.262 }' 00:14:52.262 20:11:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.262 20:11:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.829 20:11:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:52.829 20:11:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:52.829 20:11:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:52.829 20:11:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:52.829 20:11:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:52.829 20:11:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.829 20:11:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.829 20:11:38 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:52.829 20:11:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.829 20:11:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.829 20:11:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:52.829 "name": "raid_bdev1", 00:14:52.829 "uuid": "6320130d-478d-42b4-a192-72338b968f1f", 00:14:52.829 "strip_size_kb": 0, 00:14:52.829 "state": "online", 00:14:52.829 "raid_level": "raid1", 00:14:52.829 "superblock": true, 00:14:52.829 "num_base_bdevs": 2, 00:14:52.829 "num_base_bdevs_discovered": 1, 00:14:52.829 "num_base_bdevs_operational": 1, 00:14:52.829 "base_bdevs_list": [ 00:14:52.829 { 00:14:52.829 "name": null, 00:14:52.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.829 "is_configured": false, 00:14:52.829 "data_offset": 0, 00:14:52.829 "data_size": 63488 00:14:52.829 }, 00:14:52.829 { 00:14:52.829 "name": "BaseBdev2", 00:14:52.829 "uuid": "36be9fe0-d9f7-51d6-8434-abcbd291f458", 00:14:52.829 "is_configured": true, 00:14:52.829 "data_offset": 2048, 00:14:52.829 "data_size": 63488 00:14:52.829 } 00:14:52.829 ] 00:14:52.829 }' 00:14:52.829 20:11:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:52.829 20:11:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:52.829 20:11:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:52.829 20:11:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:52.829 20:11:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:52.829 20:11:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.829 20:11:38 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:52.829 [2024-10-17 20:11:38.383341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:52.829 [2024-10-17 20:11:38.399068] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:14:52.829 20:11:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.829 20:11:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:52.829 [2024-10-17 20:11:38.401777] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:53.791 20:11:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:53.791 20:11:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:53.791 20:11:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:53.791 20:11:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:53.791 20:11:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:53.791 20:11:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.791 20:11:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.791 20:11:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.791 20:11:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.791 20:11:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.051 20:11:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:54.051 "name": "raid_bdev1", 00:14:54.051 "uuid": "6320130d-478d-42b4-a192-72338b968f1f", 00:14:54.052 "strip_size_kb": 0, 00:14:54.052 "state": "online", 
00:14:54.052 "raid_level": "raid1", 00:14:54.052 "superblock": true, 00:14:54.052 "num_base_bdevs": 2, 00:14:54.052 "num_base_bdevs_discovered": 2, 00:14:54.052 "num_base_bdevs_operational": 2, 00:14:54.052 "process": { 00:14:54.052 "type": "rebuild", 00:14:54.052 "target": "spare", 00:14:54.052 "progress": { 00:14:54.052 "blocks": 20480, 00:14:54.052 "percent": 32 00:14:54.052 } 00:14:54.052 }, 00:14:54.052 "base_bdevs_list": [ 00:14:54.052 { 00:14:54.052 "name": "spare", 00:14:54.052 "uuid": "b0bd0073-404a-5bfe-8e75-6f800a8a399f", 00:14:54.052 "is_configured": true, 00:14:54.052 "data_offset": 2048, 00:14:54.052 "data_size": 63488 00:14:54.052 }, 00:14:54.052 { 00:14:54.052 "name": "BaseBdev2", 00:14:54.052 "uuid": "36be9fe0-d9f7-51d6-8434-abcbd291f458", 00:14:54.052 "is_configured": true, 00:14:54.052 "data_offset": 2048, 00:14:54.052 "data_size": 63488 00:14:54.052 } 00:14:54.052 ] 00:14:54.052 }' 00:14:54.052 20:11:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:54.052 20:11:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:54.052 20:11:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:54.052 20:11:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:54.052 20:11:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:54.052 20:11:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:54.052 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:54.052 20:11:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:14:54.052 20:11:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:54.052 20:11:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 
']' 00:14:54.052 20:11:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=414 00:14:54.052 20:11:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:54.052 20:11:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:54.052 20:11:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:54.052 20:11:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:54.052 20:11:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:54.052 20:11:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:54.052 20:11:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.052 20:11:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.052 20:11:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.052 20:11:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.052 20:11:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.052 20:11:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:54.052 "name": "raid_bdev1", 00:14:54.052 "uuid": "6320130d-478d-42b4-a192-72338b968f1f", 00:14:54.052 "strip_size_kb": 0, 00:14:54.052 "state": "online", 00:14:54.052 "raid_level": "raid1", 00:14:54.052 "superblock": true, 00:14:54.052 "num_base_bdevs": 2, 00:14:54.052 "num_base_bdevs_discovered": 2, 00:14:54.052 "num_base_bdevs_operational": 2, 00:14:54.052 "process": { 00:14:54.052 "type": "rebuild", 00:14:54.052 "target": "spare", 00:14:54.052 "progress": { 00:14:54.052 "blocks": 22528, 00:14:54.052 "percent": 35 00:14:54.052 } 00:14:54.052 }, 00:14:54.052 
"base_bdevs_list": [ 00:14:54.052 { 00:14:54.052 "name": "spare", 00:14:54.052 "uuid": "b0bd0073-404a-5bfe-8e75-6f800a8a399f", 00:14:54.052 "is_configured": true, 00:14:54.052 "data_offset": 2048, 00:14:54.052 "data_size": 63488 00:14:54.052 }, 00:14:54.052 { 00:14:54.052 "name": "BaseBdev2", 00:14:54.052 "uuid": "36be9fe0-d9f7-51d6-8434-abcbd291f458", 00:14:54.052 "is_configured": true, 00:14:54.052 "data_offset": 2048, 00:14:54.052 "data_size": 63488 00:14:54.052 } 00:14:54.052 ] 00:14:54.052 }' 00:14:54.052 20:11:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:54.052 20:11:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:54.052 20:11:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:54.311 20:11:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:54.311 20:11:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:55.246 20:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:55.246 20:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:55.246 20:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:55.246 20:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:55.246 20:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:55.246 20:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:55.246 20:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.246 20:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.246 20:11:40 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.246 20:11:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.246 20:11:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.246 20:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:55.246 "name": "raid_bdev1", 00:14:55.246 "uuid": "6320130d-478d-42b4-a192-72338b968f1f", 00:14:55.246 "strip_size_kb": 0, 00:14:55.246 "state": "online", 00:14:55.246 "raid_level": "raid1", 00:14:55.246 "superblock": true, 00:14:55.246 "num_base_bdevs": 2, 00:14:55.246 "num_base_bdevs_discovered": 2, 00:14:55.246 "num_base_bdevs_operational": 2, 00:14:55.246 "process": { 00:14:55.246 "type": "rebuild", 00:14:55.246 "target": "spare", 00:14:55.246 "progress": { 00:14:55.246 "blocks": 47104, 00:14:55.246 "percent": 74 00:14:55.246 } 00:14:55.246 }, 00:14:55.246 "base_bdevs_list": [ 00:14:55.246 { 00:14:55.246 "name": "spare", 00:14:55.246 "uuid": "b0bd0073-404a-5bfe-8e75-6f800a8a399f", 00:14:55.246 "is_configured": true, 00:14:55.246 "data_offset": 2048, 00:14:55.246 "data_size": 63488 00:14:55.246 }, 00:14:55.246 { 00:14:55.246 "name": "BaseBdev2", 00:14:55.246 "uuid": "36be9fe0-d9f7-51d6-8434-abcbd291f458", 00:14:55.246 "is_configured": true, 00:14:55.246 "data_offset": 2048, 00:14:55.246 "data_size": 63488 00:14:55.246 } 00:14:55.246 ] 00:14:55.246 }' 00:14:55.246 20:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:55.246 20:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:55.246 20:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:55.246 20:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:55.246 20:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- 
# sleep 1 00:14:56.182 [2024-10-17 20:11:41.524947] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:56.182 [2024-10-17 20:11:41.525098] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:56.182 [2024-10-17 20:11:41.525262] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:56.441 20:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:56.441 20:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:56.441 20:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:56.441 20:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:56.441 20:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:56.441 20:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:56.441 20:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.441 20:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.441 20:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.441 20:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.441 20:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.441 20:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:56.441 "name": "raid_bdev1", 00:14:56.441 "uuid": "6320130d-478d-42b4-a192-72338b968f1f", 00:14:56.441 "strip_size_kb": 0, 00:14:56.441 "state": "online", 00:14:56.441 "raid_level": "raid1", 00:14:56.441 "superblock": true, 00:14:56.441 "num_base_bdevs": 2, 00:14:56.441 
"num_base_bdevs_discovered": 2, 00:14:56.441 "num_base_bdevs_operational": 2, 00:14:56.441 "base_bdevs_list": [ 00:14:56.441 { 00:14:56.441 "name": "spare", 00:14:56.441 "uuid": "b0bd0073-404a-5bfe-8e75-6f800a8a399f", 00:14:56.441 "is_configured": true, 00:14:56.441 "data_offset": 2048, 00:14:56.441 "data_size": 63488 00:14:56.441 }, 00:14:56.441 { 00:14:56.441 "name": "BaseBdev2", 00:14:56.441 "uuid": "36be9fe0-d9f7-51d6-8434-abcbd291f458", 00:14:56.441 "is_configured": true, 00:14:56.441 "data_offset": 2048, 00:14:56.441 "data_size": 63488 00:14:56.441 } 00:14:56.441 ] 00:14:56.441 }' 00:14:56.441 20:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:56.442 20:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:56.442 20:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:56.442 20:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:56.442 20:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:56.442 20:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:56.442 20:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:56.442 20:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:56.442 20:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:56.442 20:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:56.442 20:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.442 20:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.442 20:11:42 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.442 20:11:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.442 20:11:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.701 20:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:56.701 "name": "raid_bdev1", 00:14:56.701 "uuid": "6320130d-478d-42b4-a192-72338b968f1f", 00:14:56.701 "strip_size_kb": 0, 00:14:56.701 "state": "online", 00:14:56.701 "raid_level": "raid1", 00:14:56.701 "superblock": true, 00:14:56.701 "num_base_bdevs": 2, 00:14:56.701 "num_base_bdevs_discovered": 2, 00:14:56.701 "num_base_bdevs_operational": 2, 00:14:56.701 "base_bdevs_list": [ 00:14:56.701 { 00:14:56.701 "name": "spare", 00:14:56.701 "uuid": "b0bd0073-404a-5bfe-8e75-6f800a8a399f", 00:14:56.701 "is_configured": true, 00:14:56.701 "data_offset": 2048, 00:14:56.701 "data_size": 63488 00:14:56.701 }, 00:14:56.701 { 00:14:56.701 "name": "BaseBdev2", 00:14:56.701 "uuid": "36be9fe0-d9f7-51d6-8434-abcbd291f458", 00:14:56.701 "is_configured": true, 00:14:56.701 "data_offset": 2048, 00:14:56.701 "data_size": 63488 00:14:56.701 } 00:14:56.701 ] 00:14:56.701 }' 00:14:56.701 20:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:56.701 20:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:56.701 20:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:56.701 20:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:56.701 20:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:56.701 20:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:56.701 20:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # 
local expected_state=online 00:14:56.701 20:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:56.701 20:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:56.701 20:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:56.701 20:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.701 20:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.701 20:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.701 20:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.701 20:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.701 20:11:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.701 20:11:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.701 20:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.701 20:11:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.701 20:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.701 "name": "raid_bdev1", 00:14:56.701 "uuid": "6320130d-478d-42b4-a192-72338b968f1f", 00:14:56.701 "strip_size_kb": 0, 00:14:56.701 "state": "online", 00:14:56.701 "raid_level": "raid1", 00:14:56.701 "superblock": true, 00:14:56.701 "num_base_bdevs": 2, 00:14:56.701 "num_base_bdevs_discovered": 2, 00:14:56.701 "num_base_bdevs_operational": 2, 00:14:56.701 "base_bdevs_list": [ 00:14:56.701 { 00:14:56.701 "name": "spare", 00:14:56.701 "uuid": "b0bd0073-404a-5bfe-8e75-6f800a8a399f", 00:14:56.701 "is_configured": true, 00:14:56.701 "data_offset": 2048, 00:14:56.701 
"data_size": 63488 00:14:56.701 }, 00:14:56.701 { 00:14:56.701 "name": "BaseBdev2", 00:14:56.701 "uuid": "36be9fe0-d9f7-51d6-8434-abcbd291f458", 00:14:56.701 "is_configured": true, 00:14:56.701 "data_offset": 2048, 00:14:56.701 "data_size": 63488 00:14:56.701 } 00:14:56.701 ] 00:14:56.701 }' 00:14:56.701 20:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.701 20:11:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.268 20:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:57.268 20:11:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.268 20:11:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.268 [2024-10-17 20:11:42.767487] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:57.268 [2024-10-17 20:11:42.767544] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:57.268 [2024-10-17 20:11:42.767651] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:57.269 [2024-10-17 20:11:42.767737] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:57.269 [2024-10-17 20:11:42.767754] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:57.269 20:11:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.269 20:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.269 20:11:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.269 20:11:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.269 20:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 
00:14:57.269 20:11:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.269 20:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:57.269 20:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:57.269 20:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:57.269 20:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:57.269 20:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:57.269 20:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:57.269 20:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:57.269 20:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:57.269 20:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:57.269 20:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:57.269 20:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:57.269 20:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:57.269 20:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:57.528 /dev/nbd0 00:14:57.528 20:11:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:57.528 20:11:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:57.528 20:11:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:57.528 20:11:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 
-- # local i 00:14:57.528 20:11:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:57.528 20:11:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:57.528 20:11:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:57.528 20:11:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:14:57.528 20:11:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:57.528 20:11:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:57.528 20:11:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:57.528 1+0 records in 00:14:57.528 1+0 records out 00:14:57.528 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000306578 s, 13.4 MB/s 00:14:57.528 20:11:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:57.528 20:11:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:14:57.528 20:11:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:57.528 20:11:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:57.528 20:11:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:14:57.528 20:11:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:57.528 20:11:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:57.529 20:11:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:58.097 /dev/nbd1 00:14:58.097 20:11:43 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:58.097 20:11:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:58.097 20:11:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:58.097 20:11:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:14:58.097 20:11:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:58.097 20:11:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:58.097 20:11:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:58.097 20:11:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:14:58.097 20:11:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:58.097 20:11:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:58.097 20:11:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:58.097 1+0 records in 00:14:58.097 1+0 records out 00:14:58.097 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000410927 s, 10.0 MB/s 00:14:58.097 20:11:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:58.097 20:11:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:14:58.097 20:11:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:58.097 20:11:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:58.097 20:11:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:14:58.097 20:11:43 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:58.097 20:11:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:58.097 20:11:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:58.097 20:11:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:58.097 20:11:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:58.097 20:11:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:58.097 20:11:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:58.097 20:11:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:58.097 20:11:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:58.097 20:11:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:58.357 20:11:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:58.357 20:11:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:58.357 20:11:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:58.357 20:11:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:58.357 20:11:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:58.357 20:11:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:58.616 20:11:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:58.616 20:11:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:58.616 20:11:44 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:58.616 20:11:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:58.616 20:11:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:58.616 20:11:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:58.616 20:11:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:58.616 20:11:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:58.616 20:11:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:58.616 20:11:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:58.875 20:11:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:58.875 20:11:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:58.875 20:11:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:58.875 20:11:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:58.875 20:11:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.875 20:11:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.875 20:11:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.875 20:11:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:58.875 20:11:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.875 20:11:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.875 [2024-10-17 20:11:44.284685] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
spare_delay 00:14:58.875 [2024-10-17 20:11:44.285280] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:58.875 [2024-10-17 20:11:44.285341] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:58.875 [2024-10-17 20:11:44.285363] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:58.875 [2024-10-17 20:11:44.288574] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:58.875 [2024-10-17 20:11:44.288628] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:58.875 [2024-10-17 20:11:44.288777] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:58.875 [2024-10-17 20:11:44.288863] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:58.875 [2024-10-17 20:11:44.289158] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:58.875 spare 00:14:58.875 20:11:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.875 20:11:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:58.875 20:11:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.875 20:11:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.875 [2024-10-17 20:11:44.389288] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:58.875 [2024-10-17 20:11:44.389321] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:58.875 [2024-10-17 20:11:44.389628] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:14:58.875 [2024-10-17 20:11:44.389820] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:58.875 [2024-10-17 20:11:44.389840] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:58.875 [2024-10-17 20:11:44.390058] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:58.875 20:11:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.875 20:11:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:58.875 20:11:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:58.875 20:11:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:58.875 20:11:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:58.875 20:11:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:58.875 20:11:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:58.875 20:11:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.875 20:11:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.875 20:11:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.875 20:11:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.875 20:11:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.875 20:11:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.875 20:11:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.875 20:11:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.875 20:11:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.875 
20:11:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.875 "name": "raid_bdev1", 00:14:58.875 "uuid": "6320130d-478d-42b4-a192-72338b968f1f", 00:14:58.875 "strip_size_kb": 0, 00:14:58.875 "state": "online", 00:14:58.875 "raid_level": "raid1", 00:14:58.875 "superblock": true, 00:14:58.875 "num_base_bdevs": 2, 00:14:58.875 "num_base_bdevs_discovered": 2, 00:14:58.875 "num_base_bdevs_operational": 2, 00:14:58.875 "base_bdevs_list": [ 00:14:58.875 { 00:14:58.875 "name": "spare", 00:14:58.875 "uuid": "b0bd0073-404a-5bfe-8e75-6f800a8a399f", 00:14:58.875 "is_configured": true, 00:14:58.875 "data_offset": 2048, 00:14:58.876 "data_size": 63488 00:14:58.876 }, 00:14:58.876 { 00:14:58.876 "name": "BaseBdev2", 00:14:58.876 "uuid": "36be9fe0-d9f7-51d6-8434-abcbd291f458", 00:14:58.876 "is_configured": true, 00:14:58.876 "data_offset": 2048, 00:14:58.876 "data_size": 63488 00:14:58.876 } 00:14:58.876 ] 00:14:58.876 }' 00:14:58.876 20:11:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.876 20:11:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.443 20:11:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:59.443 20:11:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:59.443 20:11:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:59.443 20:11:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:59.443 20:11:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:59.443 20:11:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.443 20:11:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.443 20:11:44 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:59.443 20:11:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.443 20:11:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.443 20:11:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:59.443 "name": "raid_bdev1", 00:14:59.443 "uuid": "6320130d-478d-42b4-a192-72338b968f1f", 00:14:59.443 "strip_size_kb": 0, 00:14:59.443 "state": "online", 00:14:59.443 "raid_level": "raid1", 00:14:59.443 "superblock": true, 00:14:59.443 "num_base_bdevs": 2, 00:14:59.443 "num_base_bdevs_discovered": 2, 00:14:59.443 "num_base_bdevs_operational": 2, 00:14:59.443 "base_bdevs_list": [ 00:14:59.443 { 00:14:59.443 "name": "spare", 00:14:59.443 "uuid": "b0bd0073-404a-5bfe-8e75-6f800a8a399f", 00:14:59.443 "is_configured": true, 00:14:59.443 "data_offset": 2048, 00:14:59.443 "data_size": 63488 00:14:59.443 }, 00:14:59.443 { 00:14:59.443 "name": "BaseBdev2", 00:14:59.443 "uuid": "36be9fe0-d9f7-51d6-8434-abcbd291f458", 00:14:59.443 "is_configured": true, 00:14:59.443 "data_offset": 2048, 00:14:59.443 "data_size": 63488 00:14:59.443 } 00:14:59.443 ] 00:14:59.443 }' 00:14:59.443 20:11:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:59.443 20:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:59.443 20:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:59.443 20:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:59.443 20:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.443 20:11:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.443 20:11:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:14:59.443 20:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:59.702 20:11:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.702 20:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:59.702 20:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:59.702 20:11:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.702 20:11:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.702 [2024-10-17 20:11:45.137168] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:59.702 20:11:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.702 20:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:59.702 20:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:59.702 20:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:59.702 20:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:59.702 20:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:59.702 20:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:59.702 20:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.702 20:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.702 20:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.702 20:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.702 20:11:45 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.702 20:11:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.702 20:11:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.702 20:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.702 20:11:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.702 20:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.702 "name": "raid_bdev1", 00:14:59.702 "uuid": "6320130d-478d-42b4-a192-72338b968f1f", 00:14:59.702 "strip_size_kb": 0, 00:14:59.702 "state": "online", 00:14:59.702 "raid_level": "raid1", 00:14:59.702 "superblock": true, 00:14:59.702 "num_base_bdevs": 2, 00:14:59.702 "num_base_bdevs_discovered": 1, 00:14:59.702 "num_base_bdevs_operational": 1, 00:14:59.702 "base_bdevs_list": [ 00:14:59.702 { 00:14:59.702 "name": null, 00:14:59.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.702 "is_configured": false, 00:14:59.702 "data_offset": 0, 00:14:59.702 "data_size": 63488 00:14:59.702 }, 00:14:59.702 { 00:14:59.702 "name": "BaseBdev2", 00:14:59.702 "uuid": "36be9fe0-d9f7-51d6-8434-abcbd291f458", 00:14:59.702 "is_configured": true, 00:14:59.702 "data_offset": 2048, 00:14:59.702 "data_size": 63488 00:14:59.702 } 00:14:59.702 ] 00:14:59.702 }' 00:14:59.702 20:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.702 20:11:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.269 20:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:00.269 20:11:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.269 20:11:45 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:00.269 [2024-10-17 20:11:45.681823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:00.269 [2024-10-17 20:11:45.682111] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:00.269 [2024-10-17 20:11:45.682139] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:00.269 [2024-10-17 20:11:45.682189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:00.269 [2024-10-17 20:11:45.697458] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:15:00.269 20:11:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.269 20:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:00.269 [2024-10-17 20:11:45.700143] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:01.204 20:11:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:01.204 20:11:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:01.204 20:11:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:01.204 20:11:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:01.204 20:11:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:01.204 20:11:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.204 20:11:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.204 20:11:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.204 20:11:46 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.204 20:11:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.204 20:11:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:01.204 "name": "raid_bdev1", 00:15:01.204 "uuid": "6320130d-478d-42b4-a192-72338b968f1f", 00:15:01.204 "strip_size_kb": 0, 00:15:01.204 "state": "online", 00:15:01.204 "raid_level": "raid1", 00:15:01.204 "superblock": true, 00:15:01.204 "num_base_bdevs": 2, 00:15:01.204 "num_base_bdevs_discovered": 2, 00:15:01.204 "num_base_bdevs_operational": 2, 00:15:01.204 "process": { 00:15:01.204 "type": "rebuild", 00:15:01.204 "target": "spare", 00:15:01.204 "progress": { 00:15:01.204 "blocks": 20480, 00:15:01.204 "percent": 32 00:15:01.204 } 00:15:01.204 }, 00:15:01.204 "base_bdevs_list": [ 00:15:01.204 { 00:15:01.205 "name": "spare", 00:15:01.205 "uuid": "b0bd0073-404a-5bfe-8e75-6f800a8a399f", 00:15:01.205 "is_configured": true, 00:15:01.205 "data_offset": 2048, 00:15:01.205 "data_size": 63488 00:15:01.205 }, 00:15:01.205 { 00:15:01.205 "name": "BaseBdev2", 00:15:01.205 "uuid": "36be9fe0-d9f7-51d6-8434-abcbd291f458", 00:15:01.205 "is_configured": true, 00:15:01.205 "data_offset": 2048, 00:15:01.205 "data_size": 63488 00:15:01.205 } 00:15:01.205 ] 00:15:01.205 }' 00:15:01.205 20:11:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:01.205 20:11:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:01.205 20:11:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:01.472 20:11:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:01.472 20:11:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:01.472 20:11:46 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.472 20:11:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.472 [2024-10-17 20:11:46.870270] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:01.472 [2024-10-17 20:11:46.908699] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:01.472 [2024-10-17 20:11:46.908996] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:01.472 [2024-10-17 20:11:46.909043] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:01.472 [2024-10-17 20:11:46.909078] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:01.472 20:11:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.472 20:11:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:01.472 20:11:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:01.472 20:11:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:01.473 20:11:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:01.473 20:11:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:01.473 20:11:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:01.473 20:11:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.473 20:11:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.473 20:11:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.473 20:11:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.473 
20:11:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.473 20:11:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.473 20:11:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.473 20:11:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.473 20:11:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.473 20:11:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.473 "name": "raid_bdev1", 00:15:01.473 "uuid": "6320130d-478d-42b4-a192-72338b968f1f", 00:15:01.473 "strip_size_kb": 0, 00:15:01.473 "state": "online", 00:15:01.473 "raid_level": "raid1", 00:15:01.473 "superblock": true, 00:15:01.473 "num_base_bdevs": 2, 00:15:01.473 "num_base_bdevs_discovered": 1, 00:15:01.473 "num_base_bdevs_operational": 1, 00:15:01.473 "base_bdevs_list": [ 00:15:01.473 { 00:15:01.473 "name": null, 00:15:01.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.473 "is_configured": false, 00:15:01.473 "data_offset": 0, 00:15:01.473 "data_size": 63488 00:15:01.473 }, 00:15:01.473 { 00:15:01.473 "name": "BaseBdev2", 00:15:01.473 "uuid": "36be9fe0-d9f7-51d6-8434-abcbd291f458", 00:15:01.473 "is_configured": true, 00:15:01.473 "data_offset": 2048, 00:15:01.473 "data_size": 63488 00:15:01.474 } 00:15:01.474 ] 00:15:01.474 }' 00:15:01.474 20:11:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.474 20:11:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.042 20:11:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:02.042 20:11:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.042 20:11:47 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:02.042 [2024-10-17 20:11:47.478184] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:02.042 [2024-10-17 20:11:47.478442] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:02.042 [2024-10-17 20:11:47.478482] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:02.042 [2024-10-17 20:11:47.478501] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:02.042 [2024-10-17 20:11:47.479228] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:02.042 [2024-10-17 20:11:47.479270] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:02.042 [2024-10-17 20:11:47.479386] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:02.042 [2024-10-17 20:11:47.479409] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:02.042 [2024-10-17 20:11:47.479423] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:02.042 [2024-10-17 20:11:47.479468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:02.042 [2024-10-17 20:11:47.495470] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:15:02.042 spare 00:15:02.042 20:11:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.042 20:11:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:02.042 [2024-10-17 20:11:47.498219] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:02.977 20:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:02.977 20:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:02.977 20:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:02.977 20:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:02.977 20:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:02.977 20:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.977 20:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.977 20:11:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.977 20:11:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.977 20:11:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.977 20:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:02.977 "name": "raid_bdev1", 00:15:02.977 "uuid": "6320130d-478d-42b4-a192-72338b968f1f", 00:15:02.977 "strip_size_kb": 0, 00:15:02.977 "state": "online", 00:15:02.977 
"raid_level": "raid1", 00:15:02.977 "superblock": true, 00:15:02.977 "num_base_bdevs": 2, 00:15:02.977 "num_base_bdevs_discovered": 2, 00:15:02.977 "num_base_bdevs_operational": 2, 00:15:02.977 "process": { 00:15:02.977 "type": "rebuild", 00:15:02.977 "target": "spare", 00:15:02.977 "progress": { 00:15:02.977 "blocks": 20480, 00:15:02.977 "percent": 32 00:15:02.977 } 00:15:02.977 }, 00:15:02.977 "base_bdevs_list": [ 00:15:02.977 { 00:15:02.977 "name": "spare", 00:15:02.977 "uuid": "b0bd0073-404a-5bfe-8e75-6f800a8a399f", 00:15:02.977 "is_configured": true, 00:15:02.977 "data_offset": 2048, 00:15:02.977 "data_size": 63488 00:15:02.977 }, 00:15:02.977 { 00:15:02.977 "name": "BaseBdev2", 00:15:02.977 "uuid": "36be9fe0-d9f7-51d6-8434-abcbd291f458", 00:15:02.977 "is_configured": true, 00:15:02.977 "data_offset": 2048, 00:15:02.978 "data_size": 63488 00:15:02.978 } 00:15:02.978 ] 00:15:02.978 }' 00:15:02.978 20:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:02.978 20:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:02.978 20:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:03.236 20:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:03.236 20:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:03.236 20:11:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.236 20:11:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.236 [2024-10-17 20:11:48.679511] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:03.236 [2024-10-17 20:11:48.707317] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:03.236 [2024-10-17 20:11:48.707586] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:03.236 [2024-10-17 20:11:48.707857] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:03.236 [2024-10-17 20:11:48.707979] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:03.236 20:11:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.236 20:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:03.236 20:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:03.236 20:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:03.236 20:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:03.236 20:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:03.236 20:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:03.236 20:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.236 20:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.236 20:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.236 20:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.236 20:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.236 20:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.236 20:11:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.236 20:11:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.236 20:11:48 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.236 20:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.236 "name": "raid_bdev1", 00:15:03.236 "uuid": "6320130d-478d-42b4-a192-72338b968f1f", 00:15:03.236 "strip_size_kb": 0, 00:15:03.236 "state": "online", 00:15:03.236 "raid_level": "raid1", 00:15:03.236 "superblock": true, 00:15:03.236 "num_base_bdevs": 2, 00:15:03.236 "num_base_bdevs_discovered": 1, 00:15:03.236 "num_base_bdevs_operational": 1, 00:15:03.236 "base_bdevs_list": [ 00:15:03.236 { 00:15:03.236 "name": null, 00:15:03.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.236 "is_configured": false, 00:15:03.236 "data_offset": 0, 00:15:03.236 "data_size": 63488 00:15:03.236 }, 00:15:03.236 { 00:15:03.236 "name": "BaseBdev2", 00:15:03.236 "uuid": "36be9fe0-d9f7-51d6-8434-abcbd291f458", 00:15:03.236 "is_configured": true, 00:15:03.236 "data_offset": 2048, 00:15:03.236 "data_size": 63488 00:15:03.236 } 00:15:03.236 ] 00:15:03.236 }' 00:15:03.236 20:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.236 20:11:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.804 20:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:03.804 20:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:03.804 20:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:03.804 20:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:03.804 20:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:03.804 20:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.804 20:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.804 20:11:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.804 20:11:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.804 20:11:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.804 20:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:03.804 "name": "raid_bdev1", 00:15:03.804 "uuid": "6320130d-478d-42b4-a192-72338b968f1f", 00:15:03.804 "strip_size_kb": 0, 00:15:03.804 "state": "online", 00:15:03.804 "raid_level": "raid1", 00:15:03.804 "superblock": true, 00:15:03.804 "num_base_bdevs": 2, 00:15:03.804 "num_base_bdevs_discovered": 1, 00:15:03.804 "num_base_bdevs_operational": 1, 00:15:03.804 "base_bdevs_list": [ 00:15:03.804 { 00:15:03.804 "name": null, 00:15:03.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.804 "is_configured": false, 00:15:03.804 "data_offset": 0, 00:15:03.804 "data_size": 63488 00:15:03.804 }, 00:15:03.804 { 00:15:03.804 "name": "BaseBdev2", 00:15:03.804 "uuid": "36be9fe0-d9f7-51d6-8434-abcbd291f458", 00:15:03.804 "is_configured": true, 00:15:03.804 "data_offset": 2048, 00:15:03.804 "data_size": 63488 00:15:03.804 } 00:15:03.804 ] 00:15:03.804 }' 00:15:03.804 20:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:03.804 20:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:03.804 20:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:03.804 20:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:03.804 20:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:03.804 20:11:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:03.804 20:11:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.804 20:11:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.804 20:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:03.804 20:11:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.804 20:11:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.804 [2024-10-17 20:11:49.438682] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:03.804 [2024-10-17 20:11:49.438745] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:03.804 [2024-10-17 20:11:49.438779] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:15:03.804 [2024-10-17 20:11:49.438803] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:03.804 [2024-10-17 20:11:49.439415] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:03.804 [2024-10-17 20:11:49.439456] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:03.804 [2024-10-17 20:11:49.439561] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:03.804 [2024-10-17 20:11:49.439581] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:03.804 [2024-10-17 20:11:49.439595] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:03.804 [2024-10-17 20:11:49.439608] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:03.804 BaseBdev1 00:15:03.804 20:11:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:15:03.804 20:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:05.180 20:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:05.180 20:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:05.180 20:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:05.180 20:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:05.180 20:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:05.180 20:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:05.180 20:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.180 20:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.180 20:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.180 20:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.180 20:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.180 20:11:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.180 20:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.180 20:11:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.180 20:11:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.180 20:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.180 "name": "raid_bdev1", 00:15:05.180 "uuid": "6320130d-478d-42b4-a192-72338b968f1f", 00:15:05.180 "strip_size_kb": 0, 
00:15:05.180 "state": "online", 00:15:05.180 "raid_level": "raid1", 00:15:05.180 "superblock": true, 00:15:05.180 "num_base_bdevs": 2, 00:15:05.180 "num_base_bdevs_discovered": 1, 00:15:05.180 "num_base_bdevs_operational": 1, 00:15:05.180 "base_bdevs_list": [ 00:15:05.180 { 00:15:05.180 "name": null, 00:15:05.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.180 "is_configured": false, 00:15:05.180 "data_offset": 0, 00:15:05.180 "data_size": 63488 00:15:05.180 }, 00:15:05.180 { 00:15:05.180 "name": "BaseBdev2", 00:15:05.180 "uuid": "36be9fe0-d9f7-51d6-8434-abcbd291f458", 00:15:05.180 "is_configured": true, 00:15:05.180 "data_offset": 2048, 00:15:05.180 "data_size": 63488 00:15:05.180 } 00:15:05.180 ] 00:15:05.180 }' 00:15:05.180 20:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.180 20:11:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.439 20:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:05.439 20:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:05.439 20:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:05.439 20:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:05.439 20:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:05.439 20:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.439 20:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.439 20:11:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.439 20:11:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.439 20:11:51 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.439 20:11:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:05.439 "name": "raid_bdev1", 00:15:05.439 "uuid": "6320130d-478d-42b4-a192-72338b968f1f", 00:15:05.439 "strip_size_kb": 0, 00:15:05.439 "state": "online", 00:15:05.439 "raid_level": "raid1", 00:15:05.439 "superblock": true, 00:15:05.439 "num_base_bdevs": 2, 00:15:05.439 "num_base_bdevs_discovered": 1, 00:15:05.439 "num_base_bdevs_operational": 1, 00:15:05.439 "base_bdevs_list": [ 00:15:05.439 { 00:15:05.439 "name": null, 00:15:05.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.439 "is_configured": false, 00:15:05.439 "data_offset": 0, 00:15:05.439 "data_size": 63488 00:15:05.439 }, 00:15:05.439 { 00:15:05.439 "name": "BaseBdev2", 00:15:05.439 "uuid": "36be9fe0-d9f7-51d6-8434-abcbd291f458", 00:15:05.439 "is_configured": true, 00:15:05.439 "data_offset": 2048, 00:15:05.439 "data_size": 63488 00:15:05.439 } 00:15:05.439 ] 00:15:05.439 }' 00:15:05.439 20:11:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:05.698 20:11:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:05.698 20:11:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:05.698 20:11:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:05.698 20:11:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:05.698 20:11:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:15:05.698 20:11:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:05.698 20:11:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:05.698 20:11:51 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:05.698 20:11:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:05.698 20:11:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:05.698 20:11:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:05.698 20:11:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.698 20:11:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.698 [2024-10-17 20:11:51.167468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:05.698 [2024-10-17 20:11:51.167682] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:05.698 [2024-10-17 20:11:51.167704] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:05.698 request: 00:15:05.698 { 00:15:05.698 "base_bdev": "BaseBdev1", 00:15:05.698 "raid_bdev": "raid_bdev1", 00:15:05.698 "method": "bdev_raid_add_base_bdev", 00:15:05.698 "req_id": 1 00:15:05.698 } 00:15:05.698 Got JSON-RPC error response 00:15:05.698 response: 00:15:05.698 { 00:15:05.698 "code": -22, 00:15:05.698 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:05.698 } 00:15:05.698 20:11:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:05.698 20:11:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:15:05.698 20:11:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:05.698 20:11:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:05.698 20:11:51 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:05.698 20:11:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:06.634 20:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:06.634 20:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:06.634 20:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:06.634 20:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:06.634 20:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:06.634 20:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:06.634 20:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.634 20:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.634 20:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.634 20:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.634 20:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.634 20:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.634 20:11:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.634 20:11:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.634 20:11:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.634 20:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.634 "name": "raid_bdev1", 00:15:06.634 "uuid": "6320130d-478d-42b4-a192-72338b968f1f", 
00:15:06.634 "strip_size_kb": 0, 00:15:06.634 "state": "online", 00:15:06.634 "raid_level": "raid1", 00:15:06.634 "superblock": true, 00:15:06.634 "num_base_bdevs": 2, 00:15:06.634 "num_base_bdevs_discovered": 1, 00:15:06.634 "num_base_bdevs_operational": 1, 00:15:06.634 "base_bdevs_list": [ 00:15:06.634 { 00:15:06.634 "name": null, 00:15:06.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.634 "is_configured": false, 00:15:06.634 "data_offset": 0, 00:15:06.634 "data_size": 63488 00:15:06.634 }, 00:15:06.634 { 00:15:06.635 "name": "BaseBdev2", 00:15:06.635 "uuid": "36be9fe0-d9f7-51d6-8434-abcbd291f458", 00:15:06.635 "is_configured": true, 00:15:06.635 "data_offset": 2048, 00:15:06.635 "data_size": 63488 00:15:06.635 } 00:15:06.635 ] 00:15:06.635 }' 00:15:06.635 20:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.635 20:11:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.202 20:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:07.202 20:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:07.202 20:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:07.202 20:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:07.202 20:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:07.202 20:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.202 20:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.202 20:11:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.202 20:11:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.202 20:11:52 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.202 20:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:07.202 "name": "raid_bdev1", 00:15:07.202 "uuid": "6320130d-478d-42b4-a192-72338b968f1f", 00:15:07.202 "strip_size_kb": 0, 00:15:07.202 "state": "online", 00:15:07.202 "raid_level": "raid1", 00:15:07.202 "superblock": true, 00:15:07.202 "num_base_bdevs": 2, 00:15:07.202 "num_base_bdevs_discovered": 1, 00:15:07.202 "num_base_bdevs_operational": 1, 00:15:07.202 "base_bdevs_list": [ 00:15:07.202 { 00:15:07.202 "name": null, 00:15:07.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.202 "is_configured": false, 00:15:07.202 "data_offset": 0, 00:15:07.202 "data_size": 63488 00:15:07.202 }, 00:15:07.202 { 00:15:07.202 "name": "BaseBdev2", 00:15:07.202 "uuid": "36be9fe0-d9f7-51d6-8434-abcbd291f458", 00:15:07.202 "is_configured": true, 00:15:07.202 "data_offset": 2048, 00:15:07.202 "data_size": 63488 00:15:07.202 } 00:15:07.202 ] 00:15:07.202 }' 00:15:07.202 20:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:07.202 20:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:07.202 20:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:07.461 20:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:07.461 20:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75823 00:15:07.461 20:11:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 75823 ']' 00:15:07.461 20:11:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 75823 00:15:07.461 20:11:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:15:07.461 20:11:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:15:07.461 20:11:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75823 00:15:07.461 killing process with pid 75823 00:15:07.461 20:11:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:07.461 20:11:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:07.461 20:11:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75823' 00:15:07.461 20:11:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 75823 00:15:07.461 Received shutdown signal, test time was about 60.000000 seconds 00:15:07.461 00:15:07.461 Latency(us) 00:15:07.461 [2024-10-17T20:11:53.115Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:07.461 [2024-10-17T20:11:53.115Z] =================================================================================================================== 00:15:07.461 [2024-10-17T20:11:53.115Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:07.461 [2024-10-17 20:11:52.921963] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:07.461 20:11:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 75823 00:15:07.461 [2024-10-17 20:11:52.922149] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:07.461 [2024-10-17 20:11:52.922237] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:07.461 [2024-10-17 20:11:52.922260] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:07.721 [2024-10-17 20:11:53.189419] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:08.657 20:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:15:08.657 00:15:08.657 real 0m26.924s 
00:15:08.657 user 0m33.055s 00:15:08.657 sys 0m4.263s 00:15:08.657 ************************************ 00:15:08.657 END TEST raid_rebuild_test_sb 00:15:08.657 ************************************ 00:15:08.657 20:11:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:08.657 20:11:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.657 20:11:54 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:15:08.657 20:11:54 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:08.657 20:11:54 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:08.657 20:11:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:08.657 ************************************ 00:15:08.657 START TEST raid_rebuild_test_io 00:15:08.657 ************************************ 00:15:08.657 20:11:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false true true 00:15:08.657 20:11:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:08.657 20:11:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:15:08.657 20:11:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:08.657 20:11:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:15:08.657 20:11:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:08.657 20:11:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:08.657 20:11:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:08.657 20:11:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:08.657 20:11:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:08.657 
20:11:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:08.657 20:11:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:08.657 20:11:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:08.657 20:11:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:08.657 20:11:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:08.657 20:11:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:08.657 20:11:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:08.657 20:11:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:08.657 20:11:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:08.657 20:11:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:08.657 20:11:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:08.657 20:11:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:08.657 20:11:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:08.657 20:11:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:08.657 20:11:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76593 00:15:08.657 20:11:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76593 00:15:08.657 20:11:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 76593 ']' 00:15:08.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:08.657 20:11:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:08.657 20:11:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:08.657 20:11:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:08.657 20:11:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:08.657 20:11:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.657 20:11:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:08.916 [2024-10-17 20:11:54.362355] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 00:15:08.916 [2024-10-17 20:11:54.363091] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --matchI/O size of 3145728 is greater than zero copy threshold (65536). 00:15:08.916 Zero copy mechanism will not be used. 
00:15:08.916 -allocations --file-prefix=spdk_pid76593 ] 00:15:08.916 [2024-10-17 20:11:54.540628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.175 [2024-10-17 20:11:54.675162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.433 [2024-10-17 20:11:54.868210] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:09.434 [2024-10-17 20:11:54.868562] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:10.001 20:11:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:10.001 20:11:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:15:10.001 20:11:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:10.001 20:11:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:10.001 20:11:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.001 20:11:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.001 BaseBdev1_malloc 00:15:10.001 20:11:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.001 20:11:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:10.001 20:11:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.001 20:11:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.001 [2024-10-17 20:11:55.419344] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:10.001 [2024-10-17 20:11:55.419461] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.001 [2024-10-17 20:11:55.419492] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007280 00:15:10.001 [2024-10-17 20:11:55.419510] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.001 [2024-10-17 20:11:55.422433] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.001 [2024-10-17 20:11:55.422497] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:10.001 BaseBdev1 00:15:10.001 20:11:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.001 20:11:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:10.001 20:11:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:10.001 20:11:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.001 20:11:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.001 BaseBdev2_malloc 00:15:10.001 20:11:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.001 20:11:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:10.001 20:11:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.001 20:11:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.001 [2024-10-17 20:11:55.472935] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:10.001 [2024-10-17 20:11:55.473051] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.001 [2024-10-17 20:11:55.473080] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:10.001 [2024-10-17 20:11:55.473097] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.001 [2024-10-17 20:11:55.475782] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.001 [2024-10-17 20:11:55.475827] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:10.001 BaseBdev2 00:15:10.001 20:11:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.001 20:11:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:10.001 20:11:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.001 20:11:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.001 spare_malloc 00:15:10.001 20:11:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.001 20:11:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:10.001 20:11:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.001 20:11:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.001 spare_delay 00:15:10.001 20:11:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.001 20:11:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:10.001 20:11:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.001 20:11:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.001 [2024-10-17 20:11:55.547093] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:10.001 [2024-10-17 20:11:55.547194] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.001 [2024-10-17 20:11:55.547228] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 
00:15:10.001 [2024-10-17 20:11:55.547247] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.001 [2024-10-17 20:11:55.550245] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.001 [2024-10-17 20:11:55.550297] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:10.001 spare 00:15:10.001 20:11:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.001 20:11:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:15:10.001 20:11:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.001 20:11:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.001 [2024-10-17 20:11:55.555137] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:10.001 [2024-10-17 20:11:55.557805] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:10.001 [2024-10-17 20:11:55.558129] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:10.001 [2024-10-17 20:11:55.558298] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:10.001 [2024-10-17 20:11:55.558717] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:10.001 [2024-10-17 20:11:55.559140] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:10.001 [2024-10-17 20:11:55.559266] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:10.001 [2024-10-17 20:11:55.559719] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:10.001 20:11:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.001 20:11:55 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:10.001 20:11:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:10.001 20:11:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:10.001 20:11:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:10.001 20:11:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:10.001 20:11:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:10.001 20:11:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.001 20:11:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.001 20:11:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.001 20:11:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.001 20:11:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.001 20:11:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.001 20:11:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.001 20:11:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.001 20:11:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.001 20:11:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.001 "name": "raid_bdev1", 00:15:10.001 "uuid": "dd56d4b1-1cc5-4cb0-8b86-27556e32bc9d", 00:15:10.001 "strip_size_kb": 0, 00:15:10.001 "state": "online", 00:15:10.001 "raid_level": "raid1", 00:15:10.001 "superblock": false, 00:15:10.001 "num_base_bdevs": 2, 
00:15:10.001 "num_base_bdevs_discovered": 2, 00:15:10.001 "num_base_bdevs_operational": 2, 00:15:10.001 "base_bdevs_list": [ 00:15:10.001 { 00:15:10.001 "name": "BaseBdev1", 00:15:10.002 "uuid": "c8b6177b-85ca-5a7e-8ffd-379bc263e91d", 00:15:10.002 "is_configured": true, 00:15:10.002 "data_offset": 0, 00:15:10.002 "data_size": 65536 00:15:10.002 }, 00:15:10.002 { 00:15:10.002 "name": "BaseBdev2", 00:15:10.002 "uuid": "26f973df-27fb-5c21-968b-f5b9dbaa0ac5", 00:15:10.002 "is_configured": true, 00:15:10.002 "data_offset": 0, 00:15:10.002 "data_size": 65536 00:15:10.002 } 00:15:10.002 ] 00:15:10.002 }' 00:15:10.002 20:11:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.002 20:11:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.569 20:11:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:10.569 20:11:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:10.569 20:11:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.569 20:11:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.569 [2024-10-17 20:11:56.076255] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:10.569 20:11:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.569 20:11:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:15:10.569 20:11:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.569 20:11:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.569 20:11:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.569 20:11:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r 
'.[].base_bdevs_list[0].data_offset' 00:15:10.569 20:11:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.569 20:11:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:10.569 20:11:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:15:10.569 20:11:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:10.569 20:11:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:10.569 20:11:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.569 20:11:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.569 [2024-10-17 20:11:56.171806] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:10.569 20:11:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.569 20:11:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:10.569 20:11:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:10.569 20:11:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:10.569 20:11:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:10.569 20:11:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:10.569 20:11:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:10.569 20:11:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.569 20:11:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.569 20:11:56 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.569 20:11:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.569 20:11:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.569 20:11:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.569 20:11:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.569 20:11:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.569 20:11:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.827 20:11:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.827 "name": "raid_bdev1", 00:15:10.827 "uuid": "dd56d4b1-1cc5-4cb0-8b86-27556e32bc9d", 00:15:10.827 "strip_size_kb": 0, 00:15:10.827 "state": "online", 00:15:10.827 "raid_level": "raid1", 00:15:10.827 "superblock": false, 00:15:10.827 "num_base_bdevs": 2, 00:15:10.827 "num_base_bdevs_discovered": 1, 00:15:10.827 "num_base_bdevs_operational": 1, 00:15:10.827 "base_bdevs_list": [ 00:15:10.827 { 00:15:10.827 "name": null, 00:15:10.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.827 "is_configured": false, 00:15:10.827 "data_offset": 0, 00:15:10.827 "data_size": 65536 00:15:10.827 }, 00:15:10.827 { 00:15:10.827 "name": "BaseBdev2", 00:15:10.827 "uuid": "26f973df-27fb-5c21-968b-f5b9dbaa0ac5", 00:15:10.827 "is_configured": true, 00:15:10.827 "data_offset": 0, 00:15:10.827 "data_size": 65536 00:15:10.827 } 00:15:10.827 ] 00:15:10.827 }' 00:15:10.827 20:11:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.827 20:11:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.827 [2024-10-17 20:11:56.307711] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:10.827 
I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:10.827 Zero copy mechanism will not be used. 00:15:10.827 Running I/O for 60 seconds... 00:15:11.091 20:11:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:11.091 20:11:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.091 20:11:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:11.091 [2024-10-17 20:11:56.711438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:11.361 20:11:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.361 20:11:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:11.361 [2024-10-17 20:11:56.762868] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:11.361 [2024-10-17 20:11:56.765454] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:11.361 [2024-10-17 20:11:56.882307] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:11.361 [2024-10-17 20:11:56.882967] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:11.619 [2024-10-17 20:11:57.100517] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:11.619 [2024-10-17 20:11:57.101264] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:11.877 145.00 IOPS, 435.00 MiB/s [2024-10-17T20:11:57.531Z] [2024-10-17 20:11:57.370152] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:11.877 [2024-10-17 20:11:57.371015] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:12.135 [2024-10-17 20:11:57.589375] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:12.135 [2024-10-17 20:11:57.589755] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:12.135 20:11:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:12.135 20:11:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:12.135 20:11:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:12.135 20:11:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:12.135 20:11:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:12.135 20:11:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.135 20:11:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.135 20:11:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.135 20:11:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.394 20:11:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.394 20:11:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:12.394 "name": "raid_bdev1", 00:15:12.394 "uuid": "dd56d4b1-1cc5-4cb0-8b86-27556e32bc9d", 00:15:12.394 "strip_size_kb": 0, 00:15:12.394 "state": "online", 00:15:12.394 "raid_level": "raid1", 00:15:12.394 "superblock": false, 00:15:12.394 "num_base_bdevs": 2, 00:15:12.394 "num_base_bdevs_discovered": 2, 00:15:12.394 "num_base_bdevs_operational": 
2, 00:15:12.394 "process": { 00:15:12.394 "type": "rebuild", 00:15:12.394 "target": "spare", 00:15:12.394 "progress": { 00:15:12.394 "blocks": 10240, 00:15:12.394 "percent": 15 00:15:12.394 } 00:15:12.394 }, 00:15:12.394 "base_bdevs_list": [ 00:15:12.394 { 00:15:12.394 "name": "spare", 00:15:12.394 "uuid": "d38d3aa8-b210-5806-8421-a07a3f42058e", 00:15:12.394 "is_configured": true, 00:15:12.394 "data_offset": 0, 00:15:12.394 "data_size": 65536 00:15:12.394 }, 00:15:12.394 { 00:15:12.394 "name": "BaseBdev2", 00:15:12.394 "uuid": "26f973df-27fb-5c21-968b-f5b9dbaa0ac5", 00:15:12.394 "is_configured": true, 00:15:12.394 "data_offset": 0, 00:15:12.394 "data_size": 65536 00:15:12.394 } 00:15:12.394 ] 00:15:12.394 }' 00:15:12.394 20:11:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:12.394 20:11:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:12.394 20:11:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:12.394 20:11:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:12.394 20:11:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:12.394 20:11:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.394 20:11:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.394 [2024-10-17 20:11:57.935666] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:12.394 [2024-10-17 20:11:57.935788] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:12.653 [2024-10-17 20:11:58.053718] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:12.653 [2024-10-17 20:11:58.064000] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:12.653 [2024-10-17 20:11:58.064263] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:12.653 [2024-10-17 20:11:58.064294] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:12.653 [2024-10-17 20:11:58.101122] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:15:12.653 20:11:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.653 20:11:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:12.653 20:11:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:12.653 20:11:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:12.653 20:11:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:12.653 20:11:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:12.653 20:11:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:12.653 20:11:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.653 20:11:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.653 20:11:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.653 20:11:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.653 20:11:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.653 20:11:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.653 20:11:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:12.653 20:11:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.653 20:11:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.653 20:11:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.653 "name": "raid_bdev1", 00:15:12.653 "uuid": "dd56d4b1-1cc5-4cb0-8b86-27556e32bc9d", 00:15:12.653 "strip_size_kb": 0, 00:15:12.653 "state": "online", 00:15:12.653 "raid_level": "raid1", 00:15:12.653 "superblock": false, 00:15:12.653 "num_base_bdevs": 2, 00:15:12.653 "num_base_bdevs_discovered": 1, 00:15:12.653 "num_base_bdevs_operational": 1, 00:15:12.653 "base_bdevs_list": [ 00:15:12.653 { 00:15:12.653 "name": null, 00:15:12.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.653 "is_configured": false, 00:15:12.653 "data_offset": 0, 00:15:12.653 "data_size": 65536 00:15:12.653 }, 00:15:12.653 { 00:15:12.653 "name": "BaseBdev2", 00:15:12.653 "uuid": "26f973df-27fb-5c21-968b-f5b9dbaa0ac5", 00:15:12.653 "is_configured": true, 00:15:12.653 "data_offset": 0, 00:15:12.653 "data_size": 65536 00:15:12.653 } 00:15:12.653 ] 00:15:12.653 }' 00:15:12.653 20:11:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.653 20:11:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:13.170 118.00 IOPS, 354.00 MiB/s [2024-10-17T20:11:58.824Z] 20:11:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:13.170 20:11:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:13.170 20:11:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:13.170 20:11:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:13.170 20:11:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:13.170 
20:11:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.170 20:11:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.170 20:11:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.170 20:11:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:13.170 20:11:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.170 20:11:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:13.170 "name": "raid_bdev1", 00:15:13.170 "uuid": "dd56d4b1-1cc5-4cb0-8b86-27556e32bc9d", 00:15:13.170 "strip_size_kb": 0, 00:15:13.170 "state": "online", 00:15:13.170 "raid_level": "raid1", 00:15:13.170 "superblock": false, 00:15:13.170 "num_base_bdevs": 2, 00:15:13.170 "num_base_bdevs_discovered": 1, 00:15:13.170 "num_base_bdevs_operational": 1, 00:15:13.170 "base_bdevs_list": [ 00:15:13.170 { 00:15:13.170 "name": null, 00:15:13.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.170 "is_configured": false, 00:15:13.170 "data_offset": 0, 00:15:13.170 "data_size": 65536 00:15:13.170 }, 00:15:13.170 { 00:15:13.170 "name": "BaseBdev2", 00:15:13.170 "uuid": "26f973df-27fb-5c21-968b-f5b9dbaa0ac5", 00:15:13.170 "is_configured": true, 00:15:13.170 "data_offset": 0, 00:15:13.170 "data_size": 65536 00:15:13.170 } 00:15:13.170 ] 00:15:13.170 }' 00:15:13.170 20:11:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:13.170 20:11:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:13.170 20:11:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:13.429 20:11:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:13.429 20:11:58 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:13.429 20:11:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.429 20:11:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:13.429 [2024-10-17 20:11:58.848831] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:13.429 20:11:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.429 20:11:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:13.429 [2024-10-17 20:11:58.897874] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:13.429 [2024-10-17 20:11:58.900291] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:13.429 [2024-10-17 20:11:59.008002] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:13.429 [2024-10-17 20:11:59.008800] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:13.689 [2024-10-17 20:11:59.227362] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:13.689 [2024-10-17 20:11:59.228132] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:13.948 131.33 IOPS, 394.00 MiB/s [2024-10-17T20:11:59.602Z] [2024-10-17 20:11:59.442924] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:13.948 [2024-10-17 20:11:59.443680] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:14.206 [2024-10-17 20:11:59.685893] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 
offset_begin: 6144 offset_end: 12288 00:15:14.465 20:11:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:14.465 20:11:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:14.465 20:11:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:14.465 20:11:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:14.465 20:11:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:14.465 20:11:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.465 20:11:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.465 20:11:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.465 20:11:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.465 20:11:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.465 20:11:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:14.465 "name": "raid_bdev1", 00:15:14.465 "uuid": "dd56d4b1-1cc5-4cb0-8b86-27556e32bc9d", 00:15:14.465 "strip_size_kb": 0, 00:15:14.465 "state": "online", 00:15:14.465 "raid_level": "raid1", 00:15:14.465 "superblock": false, 00:15:14.465 "num_base_bdevs": 2, 00:15:14.465 "num_base_bdevs_discovered": 2, 00:15:14.465 "num_base_bdevs_operational": 2, 00:15:14.465 "process": { 00:15:14.465 "type": "rebuild", 00:15:14.465 "target": "spare", 00:15:14.465 "progress": { 00:15:14.465 "blocks": 10240, 00:15:14.465 "percent": 15 00:15:14.465 } 00:15:14.465 }, 00:15:14.465 "base_bdevs_list": [ 00:15:14.465 { 00:15:14.465 "name": "spare", 00:15:14.465 "uuid": "d38d3aa8-b210-5806-8421-a07a3f42058e", 00:15:14.465 "is_configured": true, 
00:15:14.465 "data_offset": 0, 00:15:14.465 "data_size": 65536 00:15:14.465 }, 00:15:14.465 { 00:15:14.465 "name": "BaseBdev2", 00:15:14.465 "uuid": "26f973df-27fb-5c21-968b-f5b9dbaa0ac5", 00:15:14.465 "is_configured": true, 00:15:14.465 "data_offset": 0, 00:15:14.465 "data_size": 65536 00:15:14.465 } 00:15:14.465 ] 00:15:14.465 }' 00:15:14.465 20:11:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:14.465 20:11:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:14.465 20:11:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:14.465 20:12:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:14.465 20:12:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:14.465 20:12:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:15:14.465 20:12:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:14.465 20:12:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:15:14.465 20:12:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=435 00:15:14.465 20:12:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:14.465 20:12:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:14.465 20:12:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:14.465 20:12:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:14.465 20:12:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:14.465 20:12:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:15:14.465 20:12:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.465 20:12:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.465 20:12:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.465 20:12:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.465 20:12:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.466 20:12:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:14.466 "name": "raid_bdev1", 00:15:14.466 "uuid": "dd56d4b1-1cc5-4cb0-8b86-27556e32bc9d", 00:15:14.466 "strip_size_kb": 0, 00:15:14.466 "state": "online", 00:15:14.466 "raid_level": "raid1", 00:15:14.466 "superblock": false, 00:15:14.466 "num_base_bdevs": 2, 00:15:14.466 "num_base_bdevs_discovered": 2, 00:15:14.466 "num_base_bdevs_operational": 2, 00:15:14.466 "process": { 00:15:14.466 "type": "rebuild", 00:15:14.466 "target": "spare", 00:15:14.466 "progress": { 00:15:14.466 "blocks": 14336, 00:15:14.466 "percent": 21 00:15:14.466 } 00:15:14.466 }, 00:15:14.466 "base_bdevs_list": [ 00:15:14.466 { 00:15:14.466 "name": "spare", 00:15:14.466 "uuid": "d38d3aa8-b210-5806-8421-a07a3f42058e", 00:15:14.466 "is_configured": true, 00:15:14.466 "data_offset": 0, 00:15:14.466 "data_size": 65536 00:15:14.466 }, 00:15:14.466 { 00:15:14.466 "name": "BaseBdev2", 00:15:14.466 "uuid": "26f973df-27fb-5c21-968b-f5b9dbaa0ac5", 00:15:14.466 "is_configured": true, 00:15:14.466 "data_offset": 0, 00:15:14.466 "data_size": 65536 00:15:14.466 } 00:15:14.466 ] 00:15:14.466 }' 00:15:14.466 20:12:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:14.725 [2024-10-17 20:12:00.126103] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 
00:15:14.725 20:12:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:14.725 20:12:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:14.725 20:12:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:14.725 20:12:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:14.983 125.75 IOPS, 377.25 MiB/s [2024-10-17T20:12:00.637Z] [2024-10-17 20:12:00.475332] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:15:15.919 20:12:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:15.919 20:12:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:15.919 20:12:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:15.919 20:12:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:15.919 20:12:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:15.919 20:12:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:15.919 20:12:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.919 20:12:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.919 20:12:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:15.919 20:12:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.919 20:12:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.919 20:12:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:15.919 "name": 
"raid_bdev1", 00:15:15.919 "uuid": "dd56d4b1-1cc5-4cb0-8b86-27556e32bc9d", 00:15:15.919 "strip_size_kb": 0, 00:15:15.919 "state": "online", 00:15:15.919 "raid_level": "raid1", 00:15:15.919 "superblock": false, 00:15:15.919 "num_base_bdevs": 2, 00:15:15.919 "num_base_bdevs_discovered": 2, 00:15:15.919 "num_base_bdevs_operational": 2, 00:15:15.919 "process": { 00:15:15.919 "type": "rebuild", 00:15:15.919 "target": "spare", 00:15:15.919 "progress": { 00:15:15.919 "blocks": 32768, 00:15:15.919 "percent": 50 00:15:15.919 } 00:15:15.919 }, 00:15:15.919 "base_bdevs_list": [ 00:15:15.919 { 00:15:15.919 "name": "spare", 00:15:15.919 "uuid": "d38d3aa8-b210-5806-8421-a07a3f42058e", 00:15:15.919 "is_configured": true, 00:15:15.919 "data_offset": 0, 00:15:15.919 "data_size": 65536 00:15:15.919 }, 00:15:15.919 { 00:15:15.919 "name": "BaseBdev2", 00:15:15.919 "uuid": "26f973df-27fb-5c21-968b-f5b9dbaa0ac5", 00:15:15.919 "is_configured": true, 00:15:15.919 "data_offset": 0, 00:15:15.919 "data_size": 65536 00:15:15.919 } 00:15:15.919 ] 00:15:15.919 }' 00:15:15.919 20:12:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:15.919 [2024-10-17 20:12:01.281841] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:15:15.919 20:12:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:15.919 109.60 IOPS, 328.80 MiB/s [2024-10-17T20:12:01.573Z] 20:12:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:15.919 20:12:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:15.919 20:12:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:16.177 [2024-10-17 20:12:01.611569] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:15:16.746 
[2024-10-17 20:12:02.107090] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:15:16.746 97.33 IOPS, 292.00 MiB/s [2024-10-17T20:12:02.400Z] 20:12:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:16.746 20:12:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:16.746 20:12:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:16.746 20:12:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:16.746 20:12:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:16.746 20:12:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:16.746 20:12:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.746 20:12:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.746 20:12:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.746 20:12:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:16.746 20:12:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.007 20:12:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:17.007 "name": "raid_bdev1", 00:15:17.007 "uuid": "dd56d4b1-1cc5-4cb0-8b86-27556e32bc9d", 00:15:17.007 "strip_size_kb": 0, 00:15:17.007 "state": "online", 00:15:17.007 "raid_level": "raid1", 00:15:17.007 "superblock": false, 00:15:17.007 "num_base_bdevs": 2, 00:15:17.007 "num_base_bdevs_discovered": 2, 00:15:17.007 "num_base_bdevs_operational": 2, 00:15:17.007 "process": { 00:15:17.007 "type": "rebuild", 00:15:17.007 "target": "spare", 00:15:17.007 "progress": { 
00:15:17.007 "blocks": 49152, 00:15:17.007 "percent": 75 00:15:17.007 } 00:15:17.007 }, 00:15:17.007 "base_bdevs_list": [ 00:15:17.007 { 00:15:17.007 "name": "spare", 00:15:17.007 "uuid": "d38d3aa8-b210-5806-8421-a07a3f42058e", 00:15:17.007 "is_configured": true, 00:15:17.007 "data_offset": 0, 00:15:17.007 "data_size": 65536 00:15:17.007 }, 00:15:17.007 { 00:15:17.007 "name": "BaseBdev2", 00:15:17.007 "uuid": "26f973df-27fb-5c21-968b-f5b9dbaa0ac5", 00:15:17.007 "is_configured": true, 00:15:17.007 "data_offset": 0, 00:15:17.007 "data_size": 65536 00:15:17.007 } 00:15:17.007 ] 00:15:17.007 }' 00:15:17.007 20:12:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:17.007 20:12:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:17.007 20:12:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:17.007 20:12:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:17.007 20:12:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:17.954 [2024-10-17 20:12:03.240256] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:17.954 89.14 IOPS, 267.43 MiB/s [2024-10-17T20:12:03.608Z] [2024-10-17 20:12:03.340310] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:17.954 [2024-10-17 20:12:03.350526] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:17.954 20:12:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:17.954 20:12:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:17.954 20:12:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:17.954 20:12:03 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:17.954 20:12:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:17.954 20:12:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:17.955 20:12:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.955 20:12:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.955 20:12:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.955 20:12:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:17.955 20:12:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.955 20:12:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:17.955 "name": "raid_bdev1", 00:15:17.955 "uuid": "dd56d4b1-1cc5-4cb0-8b86-27556e32bc9d", 00:15:17.955 "strip_size_kb": 0, 00:15:17.955 "state": "online", 00:15:17.955 "raid_level": "raid1", 00:15:17.955 "superblock": false, 00:15:17.955 "num_base_bdevs": 2, 00:15:17.955 "num_base_bdevs_discovered": 2, 00:15:17.955 "num_base_bdevs_operational": 2, 00:15:17.955 "base_bdevs_list": [ 00:15:17.955 { 00:15:17.955 "name": "spare", 00:15:17.955 "uuid": "d38d3aa8-b210-5806-8421-a07a3f42058e", 00:15:17.955 "is_configured": true, 00:15:17.955 "data_offset": 0, 00:15:17.955 "data_size": 65536 00:15:17.955 }, 00:15:17.955 { 00:15:17.955 "name": "BaseBdev2", 00:15:17.955 "uuid": "26f973df-27fb-5c21-968b-f5b9dbaa0ac5", 00:15:17.955 "is_configured": true, 00:15:17.955 "data_offset": 0, 00:15:17.955 "data_size": 65536 00:15:17.955 } 00:15:17.955 ] 00:15:17.955 }' 00:15:17.955 20:12:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:18.214 20:12:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d 
]] 00:15:18.214 20:12:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:18.214 20:12:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:18.214 20:12:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:15:18.214 20:12:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:18.214 20:12:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:18.214 20:12:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:18.214 20:12:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:18.214 20:12:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:18.214 20:12:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.214 20:12:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.214 20:12:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:18.214 20:12:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.214 20:12:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.214 20:12:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:18.214 "name": "raid_bdev1", 00:15:18.214 "uuid": "dd56d4b1-1cc5-4cb0-8b86-27556e32bc9d", 00:15:18.214 "strip_size_kb": 0, 00:15:18.214 "state": "online", 00:15:18.214 "raid_level": "raid1", 00:15:18.214 "superblock": false, 00:15:18.214 "num_base_bdevs": 2, 00:15:18.214 "num_base_bdevs_discovered": 2, 00:15:18.214 "num_base_bdevs_operational": 2, 00:15:18.214 "base_bdevs_list": [ 00:15:18.214 { 00:15:18.214 "name": "spare", 00:15:18.214 "uuid": 
"d38d3aa8-b210-5806-8421-a07a3f42058e", 00:15:18.214 "is_configured": true, 00:15:18.214 "data_offset": 0, 00:15:18.214 "data_size": 65536 00:15:18.214 }, 00:15:18.214 { 00:15:18.214 "name": "BaseBdev2", 00:15:18.214 "uuid": "26f973df-27fb-5c21-968b-f5b9dbaa0ac5", 00:15:18.214 "is_configured": true, 00:15:18.214 "data_offset": 0, 00:15:18.214 "data_size": 65536 00:15:18.214 } 00:15:18.214 ] 00:15:18.214 }' 00:15:18.214 20:12:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:18.214 20:12:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:18.214 20:12:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:18.474 20:12:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:18.474 20:12:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:18.474 20:12:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:18.474 20:12:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:18.474 20:12:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:18.474 20:12:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:18.474 20:12:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:18.474 20:12:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.474 20:12:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.474 20:12:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:18.474 20:12:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.474 20:12:03 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.474 20:12:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.474 20:12:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.474 20:12:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:18.474 20:12:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.474 20:12:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:18.474 "name": "raid_bdev1", 00:15:18.474 "uuid": "dd56d4b1-1cc5-4cb0-8b86-27556e32bc9d", 00:15:18.474 "strip_size_kb": 0, 00:15:18.474 "state": "online", 00:15:18.474 "raid_level": "raid1", 00:15:18.474 "superblock": false, 00:15:18.474 "num_base_bdevs": 2, 00:15:18.474 "num_base_bdevs_discovered": 2, 00:15:18.474 "num_base_bdevs_operational": 2, 00:15:18.474 "base_bdevs_list": [ 00:15:18.474 { 00:15:18.474 "name": "spare", 00:15:18.474 "uuid": "d38d3aa8-b210-5806-8421-a07a3f42058e", 00:15:18.474 "is_configured": true, 00:15:18.474 "data_offset": 0, 00:15:18.474 "data_size": 65536 00:15:18.474 }, 00:15:18.474 { 00:15:18.474 "name": "BaseBdev2", 00:15:18.474 "uuid": "26f973df-27fb-5c21-968b-f5b9dbaa0ac5", 00:15:18.474 "is_configured": true, 00:15:18.474 "data_offset": 0, 00:15:18.474 "data_size": 65536 00:15:18.474 } 00:15:18.474 ] 00:15:18.474 }' 00:15:18.474 20:12:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:18.474 20:12:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:18.733 81.62 IOPS, 244.88 MiB/s [2024-10-17T20:12:04.387Z] 20:12:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:18.733 20:12:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.733 20:12:04 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:18.733 [2024-10-17 20:12:04.380227] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:18.733 [2024-10-17 20:12:04.380268] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:18.991 00:15:18.991 Latency(us) 00:15:18.991 [2024-10-17T20:12:04.645Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:18.991 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:15:18.991 raid_bdev1 : 8.14 80.82 242.46 0.00 0.00 16585.53 258.79 111053.73 00:15:18.991 [2024-10-17T20:12:04.645Z] =================================================================================================================== 00:15:18.991 [2024-10-17T20:12:04.645Z] Total : 80.82 242.46 0.00 0.00 16585.53 258.79 111053.73 00:15:18.991 [2024-10-17 20:12:04.470753] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:18.991 [2024-10-17 20:12:04.470822] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:18.991 [2024-10-17 20:12:04.470914] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:18.991 [2024-10-17 20:12:04.470937] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:18.991 { 00:15:18.991 "results": [ 00:15:18.991 { 00:15:18.991 "job": "raid_bdev1", 00:15:18.991 "core_mask": "0x1", 00:15:18.991 "workload": "randrw", 00:15:18.991 "percentage": 50, 00:15:18.991 "status": "finished", 00:15:18.991 "queue_depth": 2, 00:15:18.991 "io_size": 3145728, 00:15:18.991 "runtime": 8.141477, 00:15:18.991 "iops": 80.82071594625889, 00:15:18.991 "mibps": 242.46214783877667, 00:15:18.991 "io_failed": 0, 00:15:18.991 "io_timeout": 0, 00:15:18.991 "avg_latency_us": 16585.53332964907, 00:15:18.991 "min_latency_us": 
258.7927272727273, 00:15:18.991 "max_latency_us": 111053.73090909091 00:15:18.991 } 00:15:18.991 ], 00:15:18.991 "core_count": 1 00:15:18.991 } 00:15:18.991 20:12:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.991 20:12:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.991 20:12:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:15:18.991 20:12:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.991 20:12:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:18.991 20:12:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.991 20:12:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:18.991 20:12:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:18.991 20:12:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:15:18.991 20:12:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:15:18.991 20:12:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:18.991 20:12:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:15:18.991 20:12:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:18.991 20:12:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:18.992 20:12:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:18.992 20:12:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:18.992 20:12:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:18.992 20:12:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 
1 )) 00:15:18.992 20:12:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:15:19.251 /dev/nbd0 00:15:19.251 20:12:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:19.251 20:12:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:19.251 20:12:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:19.251 20:12:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:15:19.251 20:12:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:19.251 20:12:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:19.251 20:12:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:19.251 20:12:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:15:19.251 20:12:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:19.251 20:12:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:19.251 20:12:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:19.251 1+0 records in 00:15:19.251 1+0 records out 00:15:19.251 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000284578 s, 14.4 MB/s 00:15:19.251 20:12:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:19.251 20:12:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:15:19.251 20:12:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:19.251 20:12:04 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:19.251 20:12:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:15:19.251 20:12:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:19.251 20:12:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:19.251 20:12:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:19.251 20:12:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:15:19.251 20:12:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:15:19.251 20:12:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:19.251 20:12:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:15:19.251 20:12:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:19.251 20:12:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:19.251 20:12:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:19.251 20:12:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:19.251 20:12:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:19.251 20:12:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:19.251 20:12:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:15:19.510 /dev/nbd1 00:15:19.768 20:12:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:19.768 20:12:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:19.768 20:12:05 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:19.768 20:12:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:15:19.768 20:12:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:19.768 20:12:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:19.768 20:12:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:19.768 20:12:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:15:19.768 20:12:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:19.768 20:12:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:19.768 20:12:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:19.768 1+0 records in 00:15:19.768 1+0 records out 00:15:19.768 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000351648 s, 11.6 MB/s 00:15:19.768 20:12:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:19.768 20:12:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:15:19.768 20:12:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:19.768 20:12:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:19.768 20:12:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:15:19.768 20:12:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:19.768 20:12:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:19.768 20:12:05 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:19.768 20:12:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:19.768 20:12:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:19.768 20:12:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:19.768 20:12:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:19.768 20:12:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:19.768 20:12:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:19.768 20:12:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:20.337 20:12:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:20.337 20:12:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:20.337 20:12:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:20.337 20:12:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:20.337 20:12:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:20.337 20:12:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:20.337 20:12:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:20.337 20:12:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:20.337 20:12:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:20.337 20:12:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:20.337 20:12:05 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:20.337 20:12:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:20.337 20:12:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:20.337 20:12:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:20.337 20:12:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:20.596 20:12:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:20.596 20:12:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:20.596 20:12:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:20.596 20:12:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:20.596 20:12:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:20.596 20:12:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:20.596 20:12:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:20.596 20:12:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:20.596 20:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:20.596 20:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76593 00:15:20.596 20:12:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 76593 ']' 00:15:20.596 20:12:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 76593 00:15:20.596 20:12:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:15:20.596 20:12:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:20.596 20:12:06 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76593 00:15:20.596 killing process with pid 76593 00:15:20.596 Received shutdown signal, test time was about 9.735983 seconds 00:15:20.596 00:15:20.596 Latency(us) 00:15:20.596 [2024-10-17T20:12:06.250Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:20.596 [2024-10-17T20:12:06.250Z] =================================================================================================================== 00:15:20.596 [2024-10-17T20:12:06.250Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:20.596 20:12:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:20.596 20:12:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:20.596 20:12:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76593' 00:15:20.596 20:12:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 76593 00:15:20.596 [2024-10-17 20:12:06.046602] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:20.596 20:12:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 76593 00:15:20.596 [2024-10-17 20:12:06.243503] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:22.030 20:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:15:22.030 00:15:22.030 real 0m13.052s 00:15:22.030 user 0m17.147s 00:15:22.030 sys 0m1.437s 00:15:22.030 20:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:22.030 20:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:22.030 ************************************ 00:15:22.031 END TEST raid_rebuild_test_io 00:15:22.031 ************************************ 00:15:22.031 20:12:07 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test 
raid1 2 true true true 00:15:22.031 20:12:07 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:22.031 20:12:07 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:22.031 20:12:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:22.031 ************************************ 00:15:22.031 START TEST raid_rebuild_test_sb_io 00:15:22.031 ************************************ 00:15:22.031 20:12:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true true true 00:15:22.031 20:12:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:22.031 20:12:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:15:22.031 20:12:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:22.031 20:12:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:15:22.031 20:12:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:22.031 20:12:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:22.031 20:12:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:22.031 20:12:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:22.031 20:12:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:22.031 20:12:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:22.031 20:12:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:22.031 20:12:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:22.031 20:12:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:22.031 20:12:07 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:22.031 20:12:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:22.031 20:12:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:22.031 20:12:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:22.031 20:12:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:22.031 20:12:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:22.031 20:12:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:22.031 20:12:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:22.031 20:12:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:22.031 20:12:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:22.031 20:12:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:22.031 20:12:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76975 00:15:22.031 20:12:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76975 00:15:22.031 20:12:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 76975 ']' 00:15:22.031 20:12:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:22.031 20:12:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:22.031 20:12:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:22.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:22.031 20:12:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:22.031 20:12:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:22.031 20:12:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:22.031 [2024-10-17 20:12:07.477051] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 00:15:22.031 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:22.031 Zero copy mechanism will not be used. 00:15:22.031 [2024-10-17 20:12:07.477258] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76975 ] 00:15:22.031 [2024-10-17 20:12:07.660484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:22.290 [2024-10-17 20:12:07.787771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:22.549 [2024-10-17 20:12:07.985041] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:22.549 [2024-10-17 20:12:07.985144] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:22.807 20:12:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:22.807 20:12:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:15:22.807 20:12:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:22.807 20:12:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:22.807 20:12:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.807 20:12:08 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:23.066 BaseBdev1_malloc 00:15:23.066 20:12:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.066 20:12:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:23.066 20:12:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.066 20:12:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:23.066 [2024-10-17 20:12:08.505173] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:23.066 [2024-10-17 20:12:08.505271] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:23.066 [2024-10-17 20:12:08.505305] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:23.066 [2024-10-17 20:12:08.505324] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:23.066 [2024-10-17 20:12:08.508049] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:23.066 [2024-10-17 20:12:08.508144] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:23.066 BaseBdev1 00:15:23.066 20:12:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.066 20:12:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:23.066 20:12:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:23.066 20:12:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.066 20:12:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:23.066 BaseBdev2_malloc 00:15:23.066 20:12:08 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.066 20:12:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:23.066 20:12:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.066 20:12:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:23.066 [2024-10-17 20:12:08.550281] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:23.066 [2024-10-17 20:12:08.550418] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:23.066 [2024-10-17 20:12:08.550447] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:23.066 [2024-10-17 20:12:08.550465] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:23.066 [2024-10-17 20:12:08.553239] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:23.066 [2024-10-17 20:12:08.553310] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:23.066 BaseBdev2 00:15:23.066 20:12:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.066 20:12:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:23.066 20:12:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.066 20:12:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:23.066 spare_malloc 00:15:23.066 20:12:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.066 20:12:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:23.066 20:12:08 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.066 20:12:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:23.066 spare_delay 00:15:23.067 20:12:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.067 20:12:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:23.067 20:12:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.067 20:12:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:23.067 [2024-10-17 20:12:08.625814] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:23.067 [2024-10-17 20:12:08.625918] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:23.067 [2024-10-17 20:12:08.625956] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:23.067 [2024-10-17 20:12:08.625973] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:23.067 [2024-10-17 20:12:08.628807] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:23.067 [2024-10-17 20:12:08.628872] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:23.067 spare 00:15:23.067 20:12:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.067 20:12:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:15:23.067 20:12:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.067 20:12:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:23.067 [2024-10-17 20:12:08.633904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:15:23.067 [2024-10-17 20:12:08.636388] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:23.067 [2024-10-17 20:12:08.636654] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:23.067 [2024-10-17 20:12:08.636715] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:23.067 [2024-10-17 20:12:08.637073] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:23.067 [2024-10-17 20:12:08.637334] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:23.067 [2024-10-17 20:12:08.637385] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:23.067 [2024-10-17 20:12:08.637561] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:23.067 20:12:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.067 20:12:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:23.067 20:12:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:23.067 20:12:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:23.067 20:12:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:23.067 20:12:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:23.067 20:12:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:23.067 20:12:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.067 20:12:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.067 20:12:08 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.067 20:12:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.067 20:12:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.067 20:12:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.067 20:12:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.067 20:12:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:23.067 20:12:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.067 20:12:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.067 "name": "raid_bdev1", 00:15:23.067 "uuid": "7622f14d-f738-4a30-8da5-22421b6515f2", 00:15:23.067 "strip_size_kb": 0, 00:15:23.067 "state": "online", 00:15:23.067 "raid_level": "raid1", 00:15:23.067 "superblock": true, 00:15:23.067 "num_base_bdevs": 2, 00:15:23.067 "num_base_bdevs_discovered": 2, 00:15:23.067 "num_base_bdevs_operational": 2, 00:15:23.067 "base_bdevs_list": [ 00:15:23.067 { 00:15:23.067 "name": "BaseBdev1", 00:15:23.067 "uuid": "3782b439-9e19-5b10-920d-6cdfa9839e9c", 00:15:23.067 "is_configured": true, 00:15:23.067 "data_offset": 2048, 00:15:23.067 "data_size": 63488 00:15:23.067 }, 00:15:23.067 { 00:15:23.067 "name": "BaseBdev2", 00:15:23.067 "uuid": "ed7dfda1-81b2-5cdf-807a-1fe2ec0470ed", 00:15:23.067 "is_configured": true, 00:15:23.067 "data_offset": 2048, 00:15:23.067 "data_size": 63488 00:15:23.067 } 00:15:23.067 ] 00:15:23.067 }' 00:15:23.067 20:12:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.067 20:12:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:23.635 20:12:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd 
bdev_get_bdevs -b raid_bdev1 00:15:23.635 20:12:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.635 20:12:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:23.635 20:12:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:23.635 [2024-10-17 20:12:09.170483] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:23.635 20:12:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.635 20:12:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:15:23.635 20:12:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.635 20:12:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:23.635 20:12:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.635 20:12:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:23.635 20:12:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.635 20:12:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:23.635 20:12:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:15:23.635 20:12:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:23.635 20:12:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:23.635 20:12:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.635 20:12:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:23.635 [2024-10-17 20:12:09.278091] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:23.635 20:12:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.635 20:12:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:23.635 20:12:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:23.635 20:12:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:23.635 20:12:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:23.635 20:12:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:23.635 20:12:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:23.635 20:12:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.635 20:12:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.635 20:12:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.635 20:12:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.895 20:12:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.895 20:12:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.895 20:12:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.895 20:12:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:23.895 20:12:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.895 20:12:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.895 "name": 
"raid_bdev1", 00:15:23.895 "uuid": "7622f14d-f738-4a30-8da5-22421b6515f2", 00:15:23.895 "strip_size_kb": 0, 00:15:23.895 "state": "online", 00:15:23.895 "raid_level": "raid1", 00:15:23.895 "superblock": true, 00:15:23.895 "num_base_bdevs": 2, 00:15:23.895 "num_base_bdevs_discovered": 1, 00:15:23.895 "num_base_bdevs_operational": 1, 00:15:23.895 "base_bdevs_list": [ 00:15:23.895 { 00:15:23.895 "name": null, 00:15:23.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.895 "is_configured": false, 00:15:23.895 "data_offset": 0, 00:15:23.895 "data_size": 63488 00:15:23.895 }, 00:15:23.895 { 00:15:23.895 "name": "BaseBdev2", 00:15:23.895 "uuid": "ed7dfda1-81b2-5cdf-807a-1fe2ec0470ed", 00:15:23.895 "is_configured": true, 00:15:23.895 "data_offset": 2048, 00:15:23.895 "data_size": 63488 00:15:23.895 } 00:15:23.895 ] 00:15:23.895 }' 00:15:23.895 20:12:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.895 20:12:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:23.895 [2024-10-17 20:12:09.410329] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:23.895 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:23.895 Zero copy mechanism will not be used. 00:15:23.895 Running I/O for 60 seconds... 
00:15:24.153 20:12:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:24.153 20:12:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.153 20:12:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:24.413 [2024-10-17 20:12:09.806684] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:24.413 20:12:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.413 20:12:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:24.413 [2024-10-17 20:12:09.890062] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:24.413 [2024-10-17 20:12:09.892624] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:24.413 [2024-10-17 20:12:10.007549] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:24.413 [2024-10-17 20:12:10.008070] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:24.671 [2024-10-17 20:12:10.209856] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:24.671 [2024-10-17 20:12:10.210343] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:25.189 168.00 IOPS, 504.00 MiB/s [2024-10-17T20:12:10.843Z] [2024-10-17 20:12:10.585908] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:25.189 [2024-10-17 20:12:10.586615] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:25.189 [2024-10-17 20:12:10.797908] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:25.189 [2024-10-17 20:12:10.798326] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:25.448 20:12:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:25.448 20:12:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:25.448 20:12:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:25.448 20:12:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:25.448 20:12:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:25.448 20:12:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.448 20:12:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.448 20:12:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.448 20:12:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:25.448 20:12:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.448 20:12:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:25.448 "name": "raid_bdev1", 00:15:25.448 "uuid": "7622f14d-f738-4a30-8da5-22421b6515f2", 00:15:25.448 "strip_size_kb": 0, 00:15:25.448 "state": "online", 00:15:25.448 "raid_level": "raid1", 00:15:25.448 "superblock": true, 00:15:25.448 "num_base_bdevs": 2, 00:15:25.448 "num_base_bdevs_discovered": 2, 00:15:25.448 "num_base_bdevs_operational": 2, 00:15:25.448 "process": { 00:15:25.448 "type": "rebuild", 00:15:25.448 "target": "spare", 00:15:25.448 "progress": { 
00:15:25.448 "blocks": 10240, 00:15:25.448 "percent": 16 00:15:25.448 } 00:15:25.448 }, 00:15:25.448 "base_bdevs_list": [ 00:15:25.448 { 00:15:25.448 "name": "spare", 00:15:25.448 "uuid": "c0791c10-cf29-50fa-9ef1-b0d31c5906c0", 00:15:25.448 "is_configured": true, 00:15:25.448 "data_offset": 2048, 00:15:25.448 "data_size": 63488 00:15:25.448 }, 00:15:25.448 { 00:15:25.448 "name": "BaseBdev2", 00:15:25.448 "uuid": "ed7dfda1-81b2-5cdf-807a-1fe2ec0470ed", 00:15:25.448 "is_configured": true, 00:15:25.448 "data_offset": 2048, 00:15:25.448 "data_size": 63488 00:15:25.448 } 00:15:25.448 ] 00:15:25.448 }' 00:15:25.448 20:12:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:25.448 20:12:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:25.448 20:12:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:25.448 20:12:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:25.448 20:12:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:25.448 20:12:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.448 20:12:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:25.448 [2024-10-17 20:12:11.017303] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:25.707 [2024-10-17 20:12:11.120471] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:25.707 [2024-10-17 20:12:11.138138] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:25.707 [2024-10-17 20:12:11.138201] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:25.707 [2024-10-17 20:12:11.138221] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed 
to remove target bdev: No such device 00:15:25.707 [2024-10-17 20:12:11.187212] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:15:25.707 20:12:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.707 20:12:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:25.707 20:12:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:25.707 20:12:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:25.707 20:12:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:25.707 20:12:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:25.707 20:12:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:25.707 20:12:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.707 20:12:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.707 20:12:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.707 20:12:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.707 20:12:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.707 20:12:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.707 20:12:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:25.707 20:12:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.707 20:12:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.707 20:12:11 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.707 "name": "raid_bdev1", 00:15:25.707 "uuid": "7622f14d-f738-4a30-8da5-22421b6515f2", 00:15:25.707 "strip_size_kb": 0, 00:15:25.707 "state": "online", 00:15:25.707 "raid_level": "raid1", 00:15:25.708 "superblock": true, 00:15:25.708 "num_base_bdevs": 2, 00:15:25.708 "num_base_bdevs_discovered": 1, 00:15:25.708 "num_base_bdevs_operational": 1, 00:15:25.708 "base_bdevs_list": [ 00:15:25.708 { 00:15:25.708 "name": null, 00:15:25.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.708 "is_configured": false, 00:15:25.708 "data_offset": 0, 00:15:25.708 "data_size": 63488 00:15:25.708 }, 00:15:25.708 { 00:15:25.708 "name": "BaseBdev2", 00:15:25.708 "uuid": "ed7dfda1-81b2-5cdf-807a-1fe2ec0470ed", 00:15:25.708 "is_configured": true, 00:15:25.708 "data_offset": 2048, 00:15:25.708 "data_size": 63488 00:15:25.708 } 00:15:25.708 ] 00:15:25.708 }' 00:15:25.708 20:12:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.708 20:12:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:26.226 137.00 IOPS, 411.00 MiB/s [2024-10-17T20:12:11.880Z] 20:12:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:26.226 20:12:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:26.226 20:12:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:26.226 20:12:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:26.226 20:12:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:26.226 20:12:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.226 20:12:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:15:26.226 20:12:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.226 20:12:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:26.226 20:12:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.226 20:12:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:26.226 "name": "raid_bdev1", 00:15:26.226 "uuid": "7622f14d-f738-4a30-8da5-22421b6515f2", 00:15:26.226 "strip_size_kb": 0, 00:15:26.226 "state": "online", 00:15:26.226 "raid_level": "raid1", 00:15:26.226 "superblock": true, 00:15:26.226 "num_base_bdevs": 2, 00:15:26.226 "num_base_bdevs_discovered": 1, 00:15:26.226 "num_base_bdevs_operational": 1, 00:15:26.226 "base_bdevs_list": [ 00:15:26.226 { 00:15:26.226 "name": null, 00:15:26.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.226 "is_configured": false, 00:15:26.226 "data_offset": 0, 00:15:26.226 "data_size": 63488 00:15:26.226 }, 00:15:26.226 { 00:15:26.226 "name": "BaseBdev2", 00:15:26.226 "uuid": "ed7dfda1-81b2-5cdf-807a-1fe2ec0470ed", 00:15:26.226 "is_configured": true, 00:15:26.226 "data_offset": 2048, 00:15:26.226 "data_size": 63488 00:15:26.226 } 00:15:26.226 ] 00:15:26.226 }' 00:15:26.226 20:12:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:26.226 20:12:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:26.226 20:12:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:26.485 20:12:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:26.485 20:12:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:26.485 20:12:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:15:26.485 20:12:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:26.485 [2024-10-17 20:12:11.904905] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:26.485 20:12:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.485 20:12:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:26.485 [2024-10-17 20:12:11.972315] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:26.485 [2024-10-17 20:12:11.974996] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:26.485 [2024-10-17 20:12:12.099160] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:26.485 [2024-10-17 20:12:12.099896] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:26.744 [2024-10-17 20:12:12.324902] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:26.744 [2024-10-17 20:12:12.325362] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:27.260 137.00 IOPS, 411.00 MiB/s [2024-10-17T20:12:12.914Z] [2024-10-17 20:12:12.680968] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:27.260 [2024-10-17 20:12:12.681767] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:27.260 [2024-10-17 20:12:12.893482] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:27.260 [2024-10-17 20:12:12.893931] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 
10240 offset_begin: 6144 offset_end: 12288 00:15:27.560 20:12:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:27.560 20:12:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:27.560 20:12:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:27.560 20:12:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:27.560 20:12:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:27.560 20:12:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.560 20:12:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.560 20:12:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.560 20:12:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:27.560 20:12:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.560 20:12:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:27.560 "name": "raid_bdev1", 00:15:27.560 "uuid": "7622f14d-f738-4a30-8da5-22421b6515f2", 00:15:27.560 "strip_size_kb": 0, 00:15:27.560 "state": "online", 00:15:27.560 "raid_level": "raid1", 00:15:27.560 "superblock": true, 00:15:27.561 "num_base_bdevs": 2, 00:15:27.561 "num_base_bdevs_discovered": 2, 00:15:27.561 "num_base_bdevs_operational": 2, 00:15:27.561 "process": { 00:15:27.561 "type": "rebuild", 00:15:27.561 "target": "spare", 00:15:27.561 "progress": { 00:15:27.561 "blocks": 10240, 00:15:27.561 "percent": 16 00:15:27.561 } 00:15:27.561 }, 00:15:27.561 "base_bdevs_list": [ 00:15:27.561 { 00:15:27.561 "name": "spare", 00:15:27.561 "uuid": "c0791c10-cf29-50fa-9ef1-b0d31c5906c0", 
00:15:27.561 "is_configured": true, 00:15:27.561 "data_offset": 2048, 00:15:27.561 "data_size": 63488 00:15:27.561 }, 00:15:27.561 { 00:15:27.561 "name": "BaseBdev2", 00:15:27.561 "uuid": "ed7dfda1-81b2-5cdf-807a-1fe2ec0470ed", 00:15:27.561 "is_configured": true, 00:15:27.561 "data_offset": 2048, 00:15:27.561 "data_size": 63488 00:15:27.561 } 00:15:27.561 ] 00:15:27.561 }' 00:15:27.561 20:12:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:27.561 20:12:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:27.561 20:12:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:27.561 20:12:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:27.561 20:12:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:27.561 20:12:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:27.561 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:27.561 20:12:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:15:27.561 20:12:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:27.561 20:12:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:15:27.561 20:12:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=448 00:15:27.561 20:12:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:27.561 20:12:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:27.561 20:12:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:27.561 20:12:13 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:27.561 20:12:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:27.561 20:12:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:27.561 20:12:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.561 20:12:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.561 20:12:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.561 20:12:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:27.561 20:12:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.561 20:12:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:27.561 "name": "raid_bdev1", 00:15:27.561 "uuid": "7622f14d-f738-4a30-8da5-22421b6515f2", 00:15:27.561 "strip_size_kb": 0, 00:15:27.561 "state": "online", 00:15:27.561 "raid_level": "raid1", 00:15:27.561 "superblock": true, 00:15:27.561 "num_base_bdevs": 2, 00:15:27.561 "num_base_bdevs_discovered": 2, 00:15:27.561 "num_base_bdevs_operational": 2, 00:15:27.561 "process": { 00:15:27.561 "type": "rebuild", 00:15:27.561 "target": "spare", 00:15:27.561 "progress": { 00:15:27.561 "blocks": 12288, 00:15:27.561 "percent": 19 00:15:27.561 } 00:15:27.561 }, 00:15:27.561 "base_bdevs_list": [ 00:15:27.561 { 00:15:27.561 "name": "spare", 00:15:27.561 "uuid": "c0791c10-cf29-50fa-9ef1-b0d31c5906c0", 00:15:27.561 "is_configured": true, 00:15:27.561 "data_offset": 2048, 00:15:27.561 "data_size": 63488 00:15:27.561 }, 00:15:27.561 { 00:15:27.561 "name": "BaseBdev2", 00:15:27.561 "uuid": "ed7dfda1-81b2-5cdf-807a-1fe2ec0470ed", 00:15:27.561 "is_configured": true, 00:15:27.561 "data_offset": 2048, 00:15:27.561 "data_size": 
63488 00:15:27.561 } 00:15:27.561 ] 00:15:27.561 }' 00:15:27.561 20:12:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:27.819 [2024-10-17 20:12:13.216108] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:27.819 [2024-10-17 20:12:13.216738] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:27.819 20:12:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:27.819 20:12:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:27.819 20:12:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:27.819 20:12:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:27.819 125.75 IOPS, 377.25 MiB/s [2024-10-17T20:12:13.473Z] [2024-10-17 20:12:13.426558] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:27.819 [2024-10-17 20:12:13.427018] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:28.077 [2024-10-17 20:12:13.715783] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:15:28.336 [2024-10-17 20:12:13.824522] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:15:28.336 [2024-10-17 20:12:13.824974] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:15:28.903 20:12:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:28.903 20:12:14 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:28.903 20:12:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:28.903 20:12:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:28.903 20:12:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:28.903 20:12:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:28.903 20:12:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.903 20:12:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.903 20:12:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.903 20:12:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:28.903 20:12:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.903 20:12:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:28.903 "name": "raid_bdev1", 00:15:28.903 "uuid": "7622f14d-f738-4a30-8da5-22421b6515f2", 00:15:28.903 "strip_size_kb": 0, 00:15:28.903 "state": "online", 00:15:28.903 "raid_level": "raid1", 00:15:28.903 "superblock": true, 00:15:28.903 "num_base_bdevs": 2, 00:15:28.903 "num_base_bdevs_discovered": 2, 00:15:28.903 "num_base_bdevs_operational": 2, 00:15:28.903 "process": { 00:15:28.903 "type": "rebuild", 00:15:28.903 "target": "spare", 00:15:28.903 "progress": { 00:15:28.903 "blocks": 28672, 00:15:28.903 "percent": 45 00:15:28.903 } 00:15:28.903 }, 00:15:28.903 "base_bdevs_list": [ 00:15:28.903 { 00:15:28.903 "name": "spare", 00:15:28.903 "uuid": "c0791c10-cf29-50fa-9ef1-b0d31c5906c0", 00:15:28.903 "is_configured": true, 00:15:28.903 "data_offset": 2048, 00:15:28.903 "data_size": 63488 
00:15:28.903 }, 00:15:28.903 { 00:15:28.903 "name": "BaseBdev2", 00:15:28.903 "uuid": "ed7dfda1-81b2-5cdf-807a-1fe2ec0470ed", 00:15:28.903 "is_configured": true, 00:15:28.903 "data_offset": 2048, 00:15:28.903 "data_size": 63488 00:15:28.903 } 00:15:28.903 ] 00:15:28.903 }' 00:15:28.903 20:12:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:28.903 20:12:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:28.903 20:12:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:28.903 113.20 IOPS, 339.60 MiB/s [2024-10-17T20:12:14.557Z] 20:12:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:28.903 20:12:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:28.903 [2024-10-17 20:12:14.544194] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:15:29.162 [2024-10-17 20:12:14.772194] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:15:29.420 [2024-10-17 20:12:15.019381] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:15:29.987 101.67 IOPS, 305.00 MiB/s [2024-10-17T20:12:15.641Z] 20:12:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:29.987 20:12:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:29.987 20:12:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:29.987 20:12:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:29.987 20:12:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:15:29.987 20:12:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:29.987 20:12:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.987 20:12:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.987 20:12:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.987 20:12:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:29.987 [2024-10-17 20:12:15.472647] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:15:29.987 20:12:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.987 20:12:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:29.987 "name": "raid_bdev1", 00:15:29.987 "uuid": "7622f14d-f738-4a30-8da5-22421b6515f2", 00:15:29.987 "strip_size_kb": 0, 00:15:29.987 "state": "online", 00:15:29.987 "raid_level": "raid1", 00:15:29.987 "superblock": true, 00:15:29.987 "num_base_bdevs": 2, 00:15:29.987 "num_base_bdevs_discovered": 2, 00:15:29.987 "num_base_bdevs_operational": 2, 00:15:29.987 "process": { 00:15:29.987 "type": "rebuild", 00:15:29.987 "target": "spare", 00:15:29.987 "progress": { 00:15:29.987 "blocks": 43008, 00:15:29.987 "percent": 67 00:15:29.987 } 00:15:29.987 }, 00:15:29.987 "base_bdevs_list": [ 00:15:29.987 { 00:15:29.987 "name": "spare", 00:15:29.987 "uuid": "c0791c10-cf29-50fa-9ef1-b0d31c5906c0", 00:15:29.987 "is_configured": true, 00:15:29.987 "data_offset": 2048, 00:15:29.988 "data_size": 63488 00:15:29.988 }, 00:15:29.988 { 00:15:29.988 "name": "BaseBdev2", 00:15:29.988 "uuid": "ed7dfda1-81b2-5cdf-807a-1fe2ec0470ed", 00:15:29.988 "is_configured": true, 00:15:29.988 "data_offset": 2048, 00:15:29.988 "data_size": 63488 
00:15:29.988 } 00:15:29.988 ] 00:15:29.988 }' 00:15:29.988 20:12:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:29.988 20:12:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:29.988 20:12:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:29.988 20:12:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:29.988 20:12:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:30.246 [2024-10-17 20:12:15.684741] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:15:30.505 [2024-10-17 20:12:15.916456] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:15:30.505 [2024-10-17 20:12:16.127595] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:15:31.072 93.57 IOPS, 280.71 MiB/s [2024-10-17T20:12:16.726Z] [2024-10-17 20:12:16.546979] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:15:31.072 20:12:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:31.072 20:12:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:31.072 20:12:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:31.072 20:12:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:31.072 20:12:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:31.072 20:12:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:15:31.072 20:12:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.072 20:12:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.072 20:12:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.072 20:12:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:31.072 20:12:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.072 20:12:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:31.072 "name": "raid_bdev1", 00:15:31.072 "uuid": "7622f14d-f738-4a30-8da5-22421b6515f2", 00:15:31.072 "strip_size_kb": 0, 00:15:31.072 "state": "online", 00:15:31.072 "raid_level": "raid1", 00:15:31.072 "superblock": true, 00:15:31.072 "num_base_bdevs": 2, 00:15:31.072 "num_base_bdevs_discovered": 2, 00:15:31.072 "num_base_bdevs_operational": 2, 00:15:31.072 "process": { 00:15:31.072 "type": "rebuild", 00:15:31.072 "target": "spare", 00:15:31.072 "progress": { 00:15:31.072 "blocks": 59392, 00:15:31.072 "percent": 93 00:15:31.072 } 00:15:31.072 }, 00:15:31.072 "base_bdevs_list": [ 00:15:31.072 { 00:15:31.072 "name": "spare", 00:15:31.072 "uuid": "c0791c10-cf29-50fa-9ef1-b0d31c5906c0", 00:15:31.072 "is_configured": true, 00:15:31.072 "data_offset": 2048, 00:15:31.072 "data_size": 63488 00:15:31.072 }, 00:15:31.072 { 00:15:31.072 "name": "BaseBdev2", 00:15:31.072 "uuid": "ed7dfda1-81b2-5cdf-807a-1fe2ec0470ed", 00:15:31.072 "is_configured": true, 00:15:31.072 "data_offset": 2048, 00:15:31.072 "data_size": 63488 00:15:31.072 } 00:15:31.072 ] 00:15:31.072 }' 00:15:31.072 20:12:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:31.072 20:12:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
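(Editorial note on the `line 666: [: =: unary operator expected` message recorded earlier in this run: it is the classic single-bracket pitfall, where an unquoted variable that expands to nothing leaves `[` with too few operands. A minimal sketch of the failure mode and the two usual fixes, using a hypothetical variable name:)

```shell
#!/usr/bin/env bash
fast_io=""   # hypothetical flag variable, empty/unset

# Broken form (left commented out): without quotes this expands to
#   [ = false ]
# and "[" reports: "[: =: unary operator expected"
# [ $fast_io = false ] && echo "fast path disabled"

# Fix 1: quote the expansion so "[" always receives two operands.
if [ "$fast_io" = false ]; then
    echo "fast path disabled"
fi

# Fix 2: use the [[ ]] keyword, which never word-splits its operands.
if [[ $fast_io == false ]]; then
    echo "fast path disabled"
fi

echo "quoted tests run without errors"
```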
00:15:31.362 20:12:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:31.362 20:12:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:31.362 20:12:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:31.362 [2024-10-17 20:12:16.777433] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:31.362 [2024-10-17 20:12:16.877353] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:31.362 [2024-10-17 20:12:16.879305] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:32.189 87.38 IOPS, 262.12 MiB/s [2024-10-17T20:12:17.843Z] 20:12:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:32.189 20:12:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:32.189 20:12:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:32.189 20:12:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:32.189 20:12:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:32.189 20:12:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:32.189 20:12:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.189 20:12:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.189 20:12:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:32.189 20:12:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.189 20:12:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:15:32.189 20:12:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:32.189 "name": "raid_bdev1", 00:15:32.189 "uuid": "7622f14d-f738-4a30-8da5-22421b6515f2", 00:15:32.189 "strip_size_kb": 0, 00:15:32.189 "state": "online", 00:15:32.189 "raid_level": "raid1", 00:15:32.189 "superblock": true, 00:15:32.189 "num_base_bdevs": 2, 00:15:32.189 "num_base_bdevs_discovered": 2, 00:15:32.189 "num_base_bdevs_operational": 2, 00:15:32.189 "base_bdevs_list": [ 00:15:32.189 { 00:15:32.189 "name": "spare", 00:15:32.189 "uuid": "c0791c10-cf29-50fa-9ef1-b0d31c5906c0", 00:15:32.189 "is_configured": true, 00:15:32.189 "data_offset": 2048, 00:15:32.189 "data_size": 63488 00:15:32.189 }, 00:15:32.189 { 00:15:32.189 "name": "BaseBdev2", 00:15:32.189 "uuid": "ed7dfda1-81b2-5cdf-807a-1fe2ec0470ed", 00:15:32.189 "is_configured": true, 00:15:32.189 "data_offset": 2048, 00:15:32.189 "data_size": 63488 00:15:32.189 } 00:15:32.189 ] 00:15:32.189 }' 00:15:32.189 20:12:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:32.449 20:12:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:32.449 20:12:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:32.449 20:12:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:32.449 20:12:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:15:32.449 20:12:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:32.449 20:12:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:32.449 20:12:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:32.449 20:12:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local 
target=none 00:15:32.449 20:12:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:32.449 20:12:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.449 20:12:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.449 20:12:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:32.449 20:12:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.449 20:12:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.449 20:12:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:32.449 "name": "raid_bdev1", 00:15:32.449 "uuid": "7622f14d-f738-4a30-8da5-22421b6515f2", 00:15:32.449 "strip_size_kb": 0, 00:15:32.449 "state": "online", 00:15:32.449 "raid_level": "raid1", 00:15:32.449 "superblock": true, 00:15:32.449 "num_base_bdevs": 2, 00:15:32.449 "num_base_bdevs_discovered": 2, 00:15:32.449 "num_base_bdevs_operational": 2, 00:15:32.449 "base_bdevs_list": [ 00:15:32.449 { 00:15:32.449 "name": "spare", 00:15:32.449 "uuid": "c0791c10-cf29-50fa-9ef1-b0d31c5906c0", 00:15:32.449 "is_configured": true, 00:15:32.449 "data_offset": 2048, 00:15:32.449 "data_size": 63488 00:15:32.449 }, 00:15:32.449 { 00:15:32.449 "name": "BaseBdev2", 00:15:32.449 "uuid": "ed7dfda1-81b2-5cdf-807a-1fe2ec0470ed", 00:15:32.449 "is_configured": true, 00:15:32.449 "data_offset": 2048, 00:15:32.449 "data_size": 63488 00:15:32.449 } 00:15:32.449 ] 00:15:32.449 }' 00:15:32.449 20:12:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:32.449 20:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:32.449 20:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // 
"none"' 00:15:32.449 20:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:32.449 20:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:32.449 20:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:32.449 20:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:32.449 20:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:32.449 20:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:32.449 20:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:32.449 20:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.449 20:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.449 20:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.449 20:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.449 20:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.449 20:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.449 20:12:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.449 20:12:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:32.708 20:12:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.708 20:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.708 "name": "raid_bdev1", 00:15:32.708 "uuid": 
"7622f14d-f738-4a30-8da5-22421b6515f2", 00:15:32.708 "strip_size_kb": 0, 00:15:32.708 "state": "online", 00:15:32.708 "raid_level": "raid1", 00:15:32.708 "superblock": true, 00:15:32.708 "num_base_bdevs": 2, 00:15:32.708 "num_base_bdevs_discovered": 2, 00:15:32.708 "num_base_bdevs_operational": 2, 00:15:32.708 "base_bdevs_list": [ 00:15:32.708 { 00:15:32.708 "name": "spare", 00:15:32.708 "uuid": "c0791c10-cf29-50fa-9ef1-b0d31c5906c0", 00:15:32.708 "is_configured": true, 00:15:32.708 "data_offset": 2048, 00:15:32.708 "data_size": 63488 00:15:32.708 }, 00:15:32.708 { 00:15:32.708 "name": "BaseBdev2", 00:15:32.708 "uuid": "ed7dfda1-81b2-5cdf-807a-1fe2ec0470ed", 00:15:32.708 "is_configured": true, 00:15:32.708 "data_offset": 2048, 00:15:32.708 "data_size": 63488 00:15:32.708 } 00:15:32.708 ] 00:15:32.708 }' 00:15:32.708 20:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.708 20:12:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:33.226 80.67 IOPS, 242.00 MiB/s [2024-10-17T20:12:18.880Z] 20:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:33.226 20:12:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.226 20:12:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:33.226 [2024-10-17 20:12:18.626875] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:33.226 [2024-10-17 20:12:18.626923] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:33.226 00:15:33.226 Latency(us) 00:15:33.226 [2024-10-17T20:12:18.880Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:33.226 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:15:33.226 raid_bdev1 : 9.26 79.39 238.18 0.00 0.00 16619.76 269.96 111053.73 
00:15:33.226 [2024-10-17T20:12:18.880Z] =================================================================================================================== 00:15:33.226 [2024-10-17T20:12:18.880Z] Total : 79.39 238.18 0.00 0.00 16619.76 269.96 111053.73 00:15:33.226 [2024-10-17 20:12:18.690510] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:33.226 [2024-10-17 20:12:18.690604] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:33.226 [2024-10-17 20:12:18.690710] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:33.226 [2024-10-17 20:12:18.690730] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:33.226 { 00:15:33.226 "results": [ 00:15:33.226 { 00:15:33.226 "job": "raid_bdev1", 00:15:33.226 "core_mask": "0x1", 00:15:33.226 "workload": "randrw", 00:15:33.226 "percentage": 50, 00:15:33.226 "status": "finished", 00:15:33.226 "queue_depth": 2, 00:15:33.226 "io_size": 3145728, 00:15:33.226 "runtime": 9.257611, 00:15:33.226 "iops": 79.39413310842289, 00:15:33.226 "mibps": 238.18239932526868, 00:15:33.226 "io_failed": 0, 00:15:33.226 "io_timeout": 0, 00:15:33.226 "avg_latency_us": 16619.760643166355, 00:15:33.226 "min_latency_us": 269.96363636363634, 00:15:33.226 "max_latency_us": 111053.73090909091 00:15:33.226 } 00:15:33.226 ], 00:15:33.226 "core_count": 1 00:15:33.226 } 00:15:33.226 20:12:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.226 20:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.226 20:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:15:33.226 20:12:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.226 20:12:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # 
set +x 00:15:33.226 20:12:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.226 20:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:33.226 20:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:33.226 20:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:15:33.226 20:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:15:33.226 20:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:33.226 20:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:15:33.226 20:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:33.226 20:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:33.226 20:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:33.226 20:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:33.226 20:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:33.226 20:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:33.226 20:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:15:33.485 /dev/nbd0 00:15:33.485 20:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:33.485 20:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:33.485 20:12:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:33.485 20:12:19 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@869 -- # local i 00:15:33.485 20:12:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:33.485 20:12:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:33.485 20:12:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:33.485 20:12:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:15:33.485 20:12:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:33.485 20:12:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:33.485 20:12:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:33.485 1+0 records in 00:15:33.485 1+0 records out 00:15:33.485 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000355542 s, 11.5 MB/s 00:15:33.485 20:12:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:33.485 20:12:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:15:33.485 20:12:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:33.485 20:12:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:33.485 20:12:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:15:33.485 20:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:33.485 20:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:33.485 20:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:33.485 20:12:19 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:15:33.485 20:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:15:33.485 20:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:33.485 20:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:15:33.485 20:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:33.485 20:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:33.485 20:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:33.485 20:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:33.485 20:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:33.485 20:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:33.485 20:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:15:34.050 /dev/nbd1 00:15:34.050 20:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:34.050 20:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:34.050 20:12:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:34.050 20:12:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:15:34.050 20:12:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:34.050 20:12:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:34.050 20:12:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # 
grep -q -w nbd1 /proc/partitions 00:15:34.050 20:12:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:15:34.050 20:12:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:34.050 20:12:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:34.050 20:12:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:34.050 1+0 records in 00:15:34.050 1+0 records out 00:15:34.050 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000356048 s, 11.5 MB/s 00:15:34.050 20:12:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:34.050 20:12:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:15:34.050 20:12:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:34.050 20:12:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:34.050 20:12:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:15:34.050 20:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:34.050 20:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:34.050 20:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:34.050 20:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:34.050 20:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:34.050 20:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:34.050 20:12:19 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:34.050 20:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:34.050 20:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:34.050 20:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:34.308 20:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:34.308 20:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:34.308 20:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:34.308 20:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:34.308 20:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:34.308 20:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:34.308 20:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:34.308 20:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:34.308 20:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:34.308 20:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:34.308 20:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:34.308 20:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:34.308 20:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:34.308 20:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:34.308 20:12:19 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:34.566 20:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:34.566 20:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:34.566 20:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:34.566 20:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:34.566 20:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:34.566 20:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:34.566 20:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:34.566 20:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:34.566 20:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:34.566 20:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:34.566 20:12:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.566 20:12:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:34.825 20:12:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.825 20:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:34.825 20:12:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.825 20:12:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:34.825 [2024-10-17 20:12:20.223511] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:34.825 
[2024-10-17 20:12:20.223609] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:34.825 [2024-10-17 20:12:20.223640] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:15:34.825 [2024-10-17 20:12:20.223657] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:34.825 [2024-10-17 20:12:20.226783] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:34.825 [2024-10-17 20:12:20.226864] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:34.825 [2024-10-17 20:12:20.227013] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:34.825 [2024-10-17 20:12:20.227097] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:34.825 [2024-10-17 20:12:20.227279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:34.825 spare 00:15:34.825 20:12:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.825 20:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:34.825 20:12:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.825 20:12:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:34.825 [2024-10-17 20:12:20.327456] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:34.825 [2024-10-17 20:12:20.327517] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:34.825 [2024-10-17 20:12:20.327967] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:15:34.825 [2024-10-17 20:12:20.328250] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:34.825 [2024-10-17 20:12:20.328295] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:34.825 [2024-10-17 20:12:20.328555] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:34.825 20:12:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.825 20:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:34.825 20:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:34.825 20:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:34.825 20:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:34.825 20:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:34.825 20:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:34.825 20:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.825 20:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.825 20:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.825 20:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.825 20:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.825 20:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.825 20:12:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.825 20:12:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:34.825 20:12:20 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.825 20:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.825 "name": "raid_bdev1", 00:15:34.825 "uuid": "7622f14d-f738-4a30-8da5-22421b6515f2", 00:15:34.825 "strip_size_kb": 0, 00:15:34.825 "state": "online", 00:15:34.825 "raid_level": "raid1", 00:15:34.825 "superblock": true, 00:15:34.825 "num_base_bdevs": 2, 00:15:34.825 "num_base_bdevs_discovered": 2, 00:15:34.825 "num_base_bdevs_operational": 2, 00:15:34.825 "base_bdevs_list": [ 00:15:34.825 { 00:15:34.825 "name": "spare", 00:15:34.825 "uuid": "c0791c10-cf29-50fa-9ef1-b0d31c5906c0", 00:15:34.825 "is_configured": true, 00:15:34.825 "data_offset": 2048, 00:15:34.825 "data_size": 63488 00:15:34.825 }, 00:15:34.825 { 00:15:34.825 "name": "BaseBdev2", 00:15:34.825 "uuid": "ed7dfda1-81b2-5cdf-807a-1fe2ec0470ed", 00:15:34.825 "is_configured": true, 00:15:34.825 "data_offset": 2048, 00:15:34.825 "data_size": 63488 00:15:34.825 } 00:15:34.825 ] 00:15:34.825 }' 00:15:34.825 20:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.825 20:12:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:35.390 20:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:35.390 20:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:35.390 20:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:35.390 20:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:35.390 20:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:35.390 20:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.390 20:12:20 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.390 20:12:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:35.390 20:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.390 20:12:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.390 20:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:35.390 "name": "raid_bdev1", 00:15:35.390 "uuid": "7622f14d-f738-4a30-8da5-22421b6515f2", 00:15:35.390 "strip_size_kb": 0, 00:15:35.390 "state": "online", 00:15:35.390 "raid_level": "raid1", 00:15:35.390 "superblock": true, 00:15:35.390 "num_base_bdevs": 2, 00:15:35.390 "num_base_bdevs_discovered": 2, 00:15:35.390 "num_base_bdevs_operational": 2, 00:15:35.390 "base_bdevs_list": [ 00:15:35.390 { 00:15:35.390 "name": "spare", 00:15:35.390 "uuid": "c0791c10-cf29-50fa-9ef1-b0d31c5906c0", 00:15:35.390 "is_configured": true, 00:15:35.390 "data_offset": 2048, 00:15:35.390 "data_size": 63488 00:15:35.390 }, 00:15:35.390 { 00:15:35.390 "name": "BaseBdev2", 00:15:35.390 "uuid": "ed7dfda1-81b2-5cdf-807a-1fe2ec0470ed", 00:15:35.390 "is_configured": true, 00:15:35.390 "data_offset": 2048, 00:15:35.390 "data_size": 63488 00:15:35.390 } 00:15:35.390 ] 00:15:35.390 }' 00:15:35.390 20:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:35.390 20:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:35.390 20:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:35.390 20:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:35.390 20:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.390 20:12:20 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.390 20:12:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:35.390 20:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:35.390 20:12:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.390 20:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:35.390 20:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:35.390 20:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.390 20:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:35.390 [2024-10-17 20:12:21.012769] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:35.390 20:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.390 20:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:35.390 20:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:35.390 20:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:35.390 20:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:35.390 20:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:35.390 20:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:35.390 20:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.390 20:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.390 20:12:21 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.390 20:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.390 20:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.390 20:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.390 20:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.390 20:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:35.391 20:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.648 20:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.648 "name": "raid_bdev1", 00:15:35.648 "uuid": "7622f14d-f738-4a30-8da5-22421b6515f2", 00:15:35.648 "strip_size_kb": 0, 00:15:35.648 "state": "online", 00:15:35.648 "raid_level": "raid1", 00:15:35.648 "superblock": true, 00:15:35.648 "num_base_bdevs": 2, 00:15:35.648 "num_base_bdevs_discovered": 1, 00:15:35.648 "num_base_bdevs_operational": 1, 00:15:35.648 "base_bdevs_list": [ 00:15:35.648 { 00:15:35.648 "name": null, 00:15:35.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.648 "is_configured": false, 00:15:35.648 "data_offset": 0, 00:15:35.648 "data_size": 63488 00:15:35.648 }, 00:15:35.648 { 00:15:35.648 "name": "BaseBdev2", 00:15:35.648 "uuid": "ed7dfda1-81b2-5cdf-807a-1fe2ec0470ed", 00:15:35.648 "is_configured": true, 00:15:35.648 "data_offset": 2048, 00:15:35.648 "data_size": 63488 00:15:35.648 } 00:15:35.648 ] 00:15:35.648 }' 00:15:35.648 20:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.648 20:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:35.906 20:12:21 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:35.906 20:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.906 20:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:35.906 [2024-10-17 20:12:21.505042] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:35.906 [2024-10-17 20:12:21.505311] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:35.906 [2024-10-17 20:12:21.505338] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:35.906 [2024-10-17 20:12:21.505390] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:35.906 [2024-10-17 20:12:21.521613] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:15:35.906 20:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.906 20:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:35.906 [2024-10-17 20:12:21.524156] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:37.283 20:12:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:37.283 20:12:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:37.283 20:12:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:37.283 20:12:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:37.283 20:12:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:37.283 20:12:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:37.283 20:12:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.283 20:12:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:37.283 20:12:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.283 20:12:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.283 20:12:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:37.283 "name": "raid_bdev1", 00:15:37.283 "uuid": "7622f14d-f738-4a30-8da5-22421b6515f2", 00:15:37.283 "strip_size_kb": 0, 00:15:37.283 "state": "online", 00:15:37.283 "raid_level": "raid1", 00:15:37.283 "superblock": true, 00:15:37.283 "num_base_bdevs": 2, 00:15:37.283 "num_base_bdevs_discovered": 2, 00:15:37.283 "num_base_bdevs_operational": 2, 00:15:37.283 "process": { 00:15:37.283 "type": "rebuild", 00:15:37.283 "target": "spare", 00:15:37.283 "progress": { 00:15:37.283 "blocks": 20480, 00:15:37.283 "percent": 32 00:15:37.283 } 00:15:37.283 }, 00:15:37.283 "base_bdevs_list": [ 00:15:37.283 { 00:15:37.283 "name": "spare", 00:15:37.283 "uuid": "c0791c10-cf29-50fa-9ef1-b0d31c5906c0", 00:15:37.283 "is_configured": true, 00:15:37.283 "data_offset": 2048, 00:15:37.283 "data_size": 63488 00:15:37.283 }, 00:15:37.283 { 00:15:37.283 "name": "BaseBdev2", 00:15:37.283 "uuid": "ed7dfda1-81b2-5cdf-807a-1fe2ec0470ed", 00:15:37.283 "is_configured": true, 00:15:37.283 "data_offset": 2048, 00:15:37.283 "data_size": 63488 00:15:37.283 } 00:15:37.283 ] 00:15:37.283 }' 00:15:37.283 20:12:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:37.283 20:12:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:37.283 20:12:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // 
"none"' 00:15:37.283 20:12:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:37.283 20:12:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:37.283 20:12:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.283 20:12:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:37.283 [2024-10-17 20:12:22.693521] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:37.283 [2024-10-17 20:12:22.733431] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:37.283 [2024-10-17 20:12:22.733542] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:37.283 [2024-10-17 20:12:22.733571] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:37.283 [2024-10-17 20:12:22.733582] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:37.283 20:12:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.283 20:12:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:37.283 20:12:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:37.283 20:12:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:37.283 20:12:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:37.283 20:12:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:37.283 20:12:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:37.283 20:12:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:15:37.283 20:12:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.283 20:12:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.283 20:12:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.283 20:12:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.283 20:12:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.283 20:12:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.283 20:12:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:37.283 20:12:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.283 20:12:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.283 "name": "raid_bdev1", 00:15:37.283 "uuid": "7622f14d-f738-4a30-8da5-22421b6515f2", 00:15:37.283 "strip_size_kb": 0, 00:15:37.283 "state": "online", 00:15:37.283 "raid_level": "raid1", 00:15:37.283 "superblock": true, 00:15:37.283 "num_base_bdevs": 2, 00:15:37.283 "num_base_bdevs_discovered": 1, 00:15:37.283 "num_base_bdevs_operational": 1, 00:15:37.283 "base_bdevs_list": [ 00:15:37.283 { 00:15:37.283 "name": null, 00:15:37.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.283 "is_configured": false, 00:15:37.283 "data_offset": 0, 00:15:37.283 "data_size": 63488 00:15:37.283 }, 00:15:37.283 { 00:15:37.283 "name": "BaseBdev2", 00:15:37.283 "uuid": "ed7dfda1-81b2-5cdf-807a-1fe2ec0470ed", 00:15:37.283 "is_configured": true, 00:15:37.283 "data_offset": 2048, 00:15:37.283 "data_size": 63488 00:15:37.284 } 00:15:37.284 ] 00:15:37.284 }' 00:15:37.284 20:12:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.284 20:12:22 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:37.851 20:12:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:37.851 20:12:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.851 20:12:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:37.851 [2024-10-17 20:12:23.291621] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:37.851 [2024-10-17 20:12:23.291714] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:37.851 [2024-10-17 20:12:23.291752] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:37.851 [2024-10-17 20:12:23.291767] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:37.851 [2024-10-17 20:12:23.292415] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:37.851 [2024-10-17 20:12:23.292459] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:37.851 [2024-10-17 20:12:23.292586] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:37.851 [2024-10-17 20:12:23.292605] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:37.851 [2024-10-17 20:12:23.292621] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:37.851 [2024-10-17 20:12:23.292651] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:37.851 [2024-10-17 20:12:23.308864] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:15:37.851 spare 00:15:37.851 20:12:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.851 20:12:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:37.851 [2024-10-17 20:12:23.311431] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:38.786 20:12:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:38.787 20:12:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:38.787 20:12:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:38.787 20:12:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:38.787 20:12:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:38.787 20:12:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.787 20:12:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.787 20:12:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.787 20:12:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:38.787 20:12:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.787 20:12:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:38.787 "name": "raid_bdev1", 00:15:38.787 "uuid": "7622f14d-f738-4a30-8da5-22421b6515f2", 00:15:38.787 "strip_size_kb": 0, 00:15:38.787 
"state": "online", 00:15:38.787 "raid_level": "raid1", 00:15:38.787 "superblock": true, 00:15:38.787 "num_base_bdevs": 2, 00:15:38.787 "num_base_bdevs_discovered": 2, 00:15:38.787 "num_base_bdevs_operational": 2, 00:15:38.787 "process": { 00:15:38.787 "type": "rebuild", 00:15:38.787 "target": "spare", 00:15:38.787 "progress": { 00:15:38.787 "blocks": 20480, 00:15:38.787 "percent": 32 00:15:38.787 } 00:15:38.787 }, 00:15:38.787 "base_bdevs_list": [ 00:15:38.787 { 00:15:38.787 "name": "spare", 00:15:38.787 "uuid": "c0791c10-cf29-50fa-9ef1-b0d31c5906c0", 00:15:38.787 "is_configured": true, 00:15:38.787 "data_offset": 2048, 00:15:38.787 "data_size": 63488 00:15:38.787 }, 00:15:38.787 { 00:15:38.787 "name": "BaseBdev2", 00:15:38.787 "uuid": "ed7dfda1-81b2-5cdf-807a-1fe2ec0470ed", 00:15:38.787 "is_configured": true, 00:15:38.787 "data_offset": 2048, 00:15:38.787 "data_size": 63488 00:15:38.787 } 00:15:38.787 ] 00:15:38.787 }' 00:15:38.787 20:12:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:38.787 20:12:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:38.787 20:12:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:39.046 20:12:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:39.046 20:12:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:39.046 20:12:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.046 20:12:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:39.046 [2024-10-17 20:12:24.476942] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:39.046 [2024-10-17 20:12:24.519769] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:15:39.046 [2024-10-17 20:12:24.519880] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:39.046 [2024-10-17 20:12:24.519903] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:39.046 [2024-10-17 20:12:24.519918] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:39.046 20:12:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.046 20:12:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:39.046 20:12:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:39.046 20:12:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:39.046 20:12:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:39.046 20:12:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:39.046 20:12:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:39.046 20:12:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.046 20:12:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.046 20:12:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.046 20:12:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.046 20:12:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.046 20:12:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.046 20:12:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.046 20:12:24 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:39.046 20:12:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.046 20:12:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.046 "name": "raid_bdev1", 00:15:39.046 "uuid": "7622f14d-f738-4a30-8da5-22421b6515f2", 00:15:39.046 "strip_size_kb": 0, 00:15:39.046 "state": "online", 00:15:39.046 "raid_level": "raid1", 00:15:39.046 "superblock": true, 00:15:39.046 "num_base_bdevs": 2, 00:15:39.046 "num_base_bdevs_discovered": 1, 00:15:39.046 "num_base_bdevs_operational": 1, 00:15:39.046 "base_bdevs_list": [ 00:15:39.046 { 00:15:39.046 "name": null, 00:15:39.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.046 "is_configured": false, 00:15:39.046 "data_offset": 0, 00:15:39.046 "data_size": 63488 00:15:39.046 }, 00:15:39.046 { 00:15:39.046 "name": "BaseBdev2", 00:15:39.046 "uuid": "ed7dfda1-81b2-5cdf-807a-1fe2ec0470ed", 00:15:39.046 "is_configured": true, 00:15:39.046 "data_offset": 2048, 00:15:39.046 "data_size": 63488 00:15:39.046 } 00:15:39.046 ] 00:15:39.046 }' 00:15:39.046 20:12:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.046 20:12:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:39.641 20:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:39.641 20:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:39.641 20:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:39.641 20:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:39.641 20:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:39.641 20:12:25 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.641 20:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.641 20:12:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.641 20:12:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:39.641 20:12:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.641 20:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:39.641 "name": "raid_bdev1", 00:15:39.641 "uuid": "7622f14d-f738-4a30-8da5-22421b6515f2", 00:15:39.641 "strip_size_kb": 0, 00:15:39.641 "state": "online", 00:15:39.641 "raid_level": "raid1", 00:15:39.641 "superblock": true, 00:15:39.641 "num_base_bdevs": 2, 00:15:39.641 "num_base_bdevs_discovered": 1, 00:15:39.641 "num_base_bdevs_operational": 1, 00:15:39.641 "base_bdevs_list": [ 00:15:39.641 { 00:15:39.641 "name": null, 00:15:39.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.641 "is_configured": false, 00:15:39.641 "data_offset": 0, 00:15:39.641 "data_size": 63488 00:15:39.641 }, 00:15:39.641 { 00:15:39.641 "name": "BaseBdev2", 00:15:39.641 "uuid": "ed7dfda1-81b2-5cdf-807a-1fe2ec0470ed", 00:15:39.641 "is_configured": true, 00:15:39.641 "data_offset": 2048, 00:15:39.641 "data_size": 63488 00:15:39.641 } 00:15:39.641 ] 00:15:39.641 }' 00:15:39.641 20:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:39.641 20:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:39.641 20:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:39.641 20:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:39.641 20:12:25 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:39.641 20:12:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.641 20:12:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:39.641 20:12:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.641 20:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:39.641 20:12:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.641 20:12:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:39.641 [2024-10-17 20:12:25.235264] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:39.641 [2024-10-17 20:12:25.235393] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:39.641 [2024-10-17 20:12:25.235438] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:39.641 [2024-10-17 20:12:25.235457] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:39.641 [2024-10-17 20:12:25.236033] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:39.641 [2024-10-17 20:12:25.236104] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:39.641 [2024-10-17 20:12:25.236201] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:39.641 [2024-10-17 20:12:25.236227] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:39.641 [2024-10-17 20:12:25.236238] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:39.641 [2024-10-17 20:12:25.236259] 
bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:39.641 BaseBdev1 00:15:39.641 20:12:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.641 20:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:41.018 20:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:41.018 20:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:41.018 20:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:41.018 20:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:41.018 20:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:41.018 20:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:41.018 20:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.018 20:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.018 20:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.018 20:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.018 20:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.018 20:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.018 20:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.018 20:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:41.018 20:12:26 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.018 20:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.018 "name": "raid_bdev1", 00:15:41.018 "uuid": "7622f14d-f738-4a30-8da5-22421b6515f2", 00:15:41.018 "strip_size_kb": 0, 00:15:41.018 "state": "online", 00:15:41.018 "raid_level": "raid1", 00:15:41.018 "superblock": true, 00:15:41.018 "num_base_bdevs": 2, 00:15:41.018 "num_base_bdevs_discovered": 1, 00:15:41.018 "num_base_bdevs_operational": 1, 00:15:41.018 "base_bdevs_list": [ 00:15:41.018 { 00:15:41.018 "name": null, 00:15:41.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.018 "is_configured": false, 00:15:41.018 "data_offset": 0, 00:15:41.018 "data_size": 63488 00:15:41.018 }, 00:15:41.018 { 00:15:41.018 "name": "BaseBdev2", 00:15:41.018 "uuid": "ed7dfda1-81b2-5cdf-807a-1fe2ec0470ed", 00:15:41.018 "is_configured": true, 00:15:41.018 "data_offset": 2048, 00:15:41.018 "data_size": 63488 00:15:41.018 } 00:15:41.018 ] 00:15:41.018 }' 00:15:41.018 20:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.018 20:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:41.286 20:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:41.286 20:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:41.286 20:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:41.286 20:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:41.286 20:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:41.286 20:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.286 20:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:15:41.286 20:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:41.286 20:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.286 20:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.286 20:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:41.286 "name": "raid_bdev1", 00:15:41.286 "uuid": "7622f14d-f738-4a30-8da5-22421b6515f2", 00:15:41.286 "strip_size_kb": 0, 00:15:41.286 "state": "online", 00:15:41.286 "raid_level": "raid1", 00:15:41.286 "superblock": true, 00:15:41.286 "num_base_bdevs": 2, 00:15:41.286 "num_base_bdevs_discovered": 1, 00:15:41.286 "num_base_bdevs_operational": 1, 00:15:41.286 "base_bdevs_list": [ 00:15:41.286 { 00:15:41.286 "name": null, 00:15:41.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.286 "is_configured": false, 00:15:41.286 "data_offset": 0, 00:15:41.286 "data_size": 63488 00:15:41.286 }, 00:15:41.286 { 00:15:41.286 "name": "BaseBdev2", 00:15:41.286 "uuid": "ed7dfda1-81b2-5cdf-807a-1fe2ec0470ed", 00:15:41.286 "is_configured": true, 00:15:41.286 "data_offset": 2048, 00:15:41.286 "data_size": 63488 00:15:41.286 } 00:15:41.286 ] 00:15:41.286 }' 00:15:41.286 20:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:41.286 20:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:41.286 20:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:41.286 20:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:41.286 20:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:41.286 20:12:26 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@650 -- # local es=0 00:15:41.286 20:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:41.286 20:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:41.286 20:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:41.286 20:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:41.286 20:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:41.286 20:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:41.286 20:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.286 20:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:41.286 [2024-10-17 20:12:26.915997] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:41.286 [2024-10-17 20:12:26.916265] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:41.286 [2024-10-17 20:12:26.916287] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:41.286 request: 00:15:41.286 { 00:15:41.286 "base_bdev": "BaseBdev1", 00:15:41.286 "raid_bdev": "raid_bdev1", 00:15:41.286 "method": "bdev_raid_add_base_bdev", 00:15:41.286 "req_id": 1 00:15:41.286 } 00:15:41.286 Got JSON-RPC error response 00:15:41.286 response: 00:15:41.286 { 00:15:41.286 "code": -22, 00:15:41.286 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:41.286 } 00:15:41.286 20:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 
00:15:41.286 20:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:15:41.287 20:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:41.287 20:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:41.287 20:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:41.287 20:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:42.662 20:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:42.662 20:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:42.662 20:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:42.662 20:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:42.662 20:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:42.662 20:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:42.662 20:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.662 20:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.662 20:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.662 20:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.662 20:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.662 20:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.662 20:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:15:42.662 20:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:42.662 20:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.662 20:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.662 "name": "raid_bdev1", 00:15:42.662 "uuid": "7622f14d-f738-4a30-8da5-22421b6515f2", 00:15:42.662 "strip_size_kb": 0, 00:15:42.662 "state": "online", 00:15:42.662 "raid_level": "raid1", 00:15:42.662 "superblock": true, 00:15:42.662 "num_base_bdevs": 2, 00:15:42.662 "num_base_bdevs_discovered": 1, 00:15:42.662 "num_base_bdevs_operational": 1, 00:15:42.662 "base_bdevs_list": [ 00:15:42.662 { 00:15:42.662 "name": null, 00:15:42.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.662 "is_configured": false, 00:15:42.662 "data_offset": 0, 00:15:42.662 "data_size": 63488 00:15:42.662 }, 00:15:42.662 { 00:15:42.662 "name": "BaseBdev2", 00:15:42.662 "uuid": "ed7dfda1-81b2-5cdf-807a-1fe2ec0470ed", 00:15:42.662 "is_configured": true, 00:15:42.662 "data_offset": 2048, 00:15:42.662 "data_size": 63488 00:15:42.662 } 00:15:42.662 ] 00:15:42.662 }' 00:15:42.662 20:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.662 20:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:42.921 20:12:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:42.921 20:12:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:42.921 20:12:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:42.921 20:12:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:42.921 20:12:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:42.921 20:12:28 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.921 20:12:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.921 20:12:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.921 20:12:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:42.921 20:12:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.921 20:12:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:42.921 "name": "raid_bdev1", 00:15:42.921 "uuid": "7622f14d-f738-4a30-8da5-22421b6515f2", 00:15:42.921 "strip_size_kb": 0, 00:15:42.921 "state": "online", 00:15:42.921 "raid_level": "raid1", 00:15:42.921 "superblock": true, 00:15:42.921 "num_base_bdevs": 2, 00:15:42.921 "num_base_bdevs_discovered": 1, 00:15:42.921 "num_base_bdevs_operational": 1, 00:15:42.921 "base_bdevs_list": [ 00:15:42.921 { 00:15:42.921 "name": null, 00:15:42.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.921 "is_configured": false, 00:15:42.921 "data_offset": 0, 00:15:42.921 "data_size": 63488 00:15:42.921 }, 00:15:42.921 { 00:15:42.921 "name": "BaseBdev2", 00:15:42.921 "uuid": "ed7dfda1-81b2-5cdf-807a-1fe2ec0470ed", 00:15:42.921 "is_configured": true, 00:15:42.921 "data_offset": 2048, 00:15:42.921 "data_size": 63488 00:15:42.921 } 00:15:42.921 ] 00:15:42.921 }' 00:15:42.921 20:12:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:42.921 20:12:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:42.921 20:12:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:43.180 20:12:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:43.180 20:12:28 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 76975 00:15:43.180 20:12:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 76975 ']' 00:15:43.180 20:12:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 76975 00:15:43.180 20:12:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:15:43.180 20:12:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:43.180 20:12:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76975 00:15:43.180 20:12:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:43.180 20:12:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:43.180 killing process with pid 76975 00:15:43.180 20:12:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76975' 00:15:43.180 20:12:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 76975 00:15:43.180 Received shutdown signal, test time was about 19.227292 seconds 00:15:43.180 00:15:43.180 Latency(us) 00:15:43.180 [2024-10-17T20:12:28.834Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:43.180 [2024-10-17T20:12:28.834Z] =================================================================================================================== 00:15:43.180 [2024-10-17T20:12:28.834Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:43.180 [2024-10-17 20:12:28.640438] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:43.180 20:12:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 76975 00:15:43.180 [2024-10-17 20:12:28.640588] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:43.180 [2024-10-17 20:12:28.640667] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:43.180 [2024-10-17 20:12:28.640683] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:43.180 [2024-10-17 20:12:28.827274] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:44.556 20:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:15:44.556 00:15:44.556 real 0m22.468s 00:15:44.556 user 0m30.312s 00:15:44.556 sys 0m2.028s 00:15:44.556 20:12:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:44.556 20:12:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:44.556 ************************************ 00:15:44.556 END TEST raid_rebuild_test_sb_io 00:15:44.556 ************************************ 00:15:44.556 20:12:29 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:15:44.556 20:12:29 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:15:44.556 20:12:29 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:44.556 20:12:29 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:44.556 20:12:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:44.556 ************************************ 00:15:44.556 START TEST raid_rebuild_test 00:15:44.556 ************************************ 00:15:44.556 20:12:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false false true 00:15:44.556 20:12:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:44.556 20:12:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:44.556 20:12:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:44.556 20:12:29 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:44.556 20:12:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:44.556 20:12:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:44.556 20:12:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:44.556 20:12:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:44.556 20:12:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:44.556 20:12:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:44.556 20:12:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:44.556 20:12:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:44.556 20:12:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:44.556 20:12:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:44.556 20:12:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:44.556 20:12:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:44.556 20:12:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:44.556 20:12:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:44.556 20:12:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:44.557 20:12:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:44.557 20:12:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:44.557 20:12:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:44.557 20:12:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 
00:15:44.557 20:12:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:44.557 20:12:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:44.557 20:12:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:44.557 20:12:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:44.557 20:12:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:44.557 20:12:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:44.557 20:12:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77696 00:15:44.557 20:12:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77696 00:15:44.557 20:12:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:44.557 20:12:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 77696 ']' 00:15:44.557 20:12:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:44.557 20:12:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:44.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:44.557 20:12:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:44.557 20:12:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:44.557 20:12:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.557 [2024-10-17 20:12:29.998628] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 
00:15:44.557 [2024-10-17 20:12:29.998867] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77696 ] 00:15:44.557 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:44.557 Zero copy mechanism will not be used. 00:15:44.557 [2024-10-17 20:12:30.175839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:44.815 [2024-10-17 20:12:30.299215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:45.073 [2024-10-17 20:12:30.491274] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:45.073 [2024-10-17 20:12:30.491355] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:45.331 20:12:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:45.331 20:12:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:15:45.331 20:12:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:45.331 20:12:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:45.331 20:12:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.331 20:12:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.591 BaseBdev1_malloc 00:15:45.591 20:12:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.591 20:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:45.591 20:12:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.591 20:12:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:15:45.591 [2024-10-17 20:12:31.007563] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:45.591 [2024-10-17 20:12:31.007664] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:45.591 [2024-10-17 20:12:31.007698] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:45.591 [2024-10-17 20:12:31.007718] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:45.591 [2024-10-17 20:12:31.010537] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:45.591 [2024-10-17 20:12:31.010604] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:45.591 BaseBdev1 00:15:45.592 20:12:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.592 20:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:45.592 20:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:45.592 20:12:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.592 20:12:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.592 BaseBdev2_malloc 00:15:45.592 20:12:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.592 20:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:45.592 20:12:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.592 20:12:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.592 [2024-10-17 20:12:31.059074] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:45.592 [2024-10-17 20:12:31.059149] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:15:45.592 [2024-10-17 20:12:31.059178] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:45.592 [2024-10-17 20:12:31.059197] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:45.592 [2024-10-17 20:12:31.061888] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:45.592 [2024-10-17 20:12:31.061953] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:45.592 BaseBdev2 00:15:45.592 20:12:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.592 20:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:45.592 20:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:45.592 20:12:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.592 20:12:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.592 BaseBdev3_malloc 00:15:45.592 20:12:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.592 20:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:45.592 20:12:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.592 20:12:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.592 [2024-10-17 20:12:31.120935] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:45.592 [2024-10-17 20:12:31.121049] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:45.592 [2024-10-17 20:12:31.121084] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:45.592 [2024-10-17 20:12:31.121103] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:45.592 [2024-10-17 20:12:31.123826] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:45.592 [2024-10-17 20:12:31.123886] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:45.592 BaseBdev3 00:15:45.592 20:12:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.592 20:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:45.592 20:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:45.592 20:12:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.592 20:12:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.592 BaseBdev4_malloc 00:15:45.592 20:12:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.592 20:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:45.592 20:12:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.592 20:12:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.592 [2024-10-17 20:12:31.171939] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:45.592 [2024-10-17 20:12:31.172043] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:45.592 [2024-10-17 20:12:31.172084] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:45.592 [2024-10-17 20:12:31.172106] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:45.592 [2024-10-17 20:12:31.174821] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:45.592 [2024-10-17 20:12:31.174903] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:45.592 BaseBdev4 00:15:45.592 20:12:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.592 20:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:45.592 20:12:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.592 20:12:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.592 spare_malloc 00:15:45.592 20:12:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.592 20:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:45.592 20:12:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.592 20:12:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.592 spare_delay 00:15:45.592 20:12:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.592 20:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:45.592 20:12:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.592 20:12:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.592 [2024-10-17 20:12:31.230045] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:45.592 [2024-10-17 20:12:31.230131] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:45.592 [2024-10-17 20:12:31.230159] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:45.592 [2024-10-17 20:12:31.230177] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:45.592 [2024-10-17 
20:12:31.232898] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:45.592 [2024-10-17 20:12:31.232964] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:45.592 spare 00:15:45.592 20:12:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.592 20:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:45.592 20:12:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.592 20:12:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.592 [2024-10-17 20:12:31.238092] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:45.592 [2024-10-17 20:12:31.240589] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:45.592 [2024-10-17 20:12:31.240699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:45.592 [2024-10-17 20:12:31.240776] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:45.592 [2024-10-17 20:12:31.240933] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:45.592 [2024-10-17 20:12:31.240964] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:45.592 [2024-10-17 20:12:31.241309] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:45.592 [2024-10-17 20:12:31.241543] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:45.592 [2024-10-17 20:12:31.241573] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:45.592 [2024-10-17 20:12:31.241761] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:15:45.592 20:12:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.851 20:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:45.851 20:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:45.851 20:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:45.851 20:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:45.851 20:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:45.851 20:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:45.851 20:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.851 20:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.851 20:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.851 20:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.851 20:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.851 20:12:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.851 20:12:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.851 20:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.851 20:12:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.851 20:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.851 "name": "raid_bdev1", 00:15:45.851 "uuid": "670da708-ac44-419f-9c67-9e7c1650cf82", 00:15:45.851 "strip_size_kb": 0, 00:15:45.851 "state": "online", 00:15:45.851 "raid_level": 
"raid1", 00:15:45.851 "superblock": false, 00:15:45.851 "num_base_bdevs": 4, 00:15:45.851 "num_base_bdevs_discovered": 4, 00:15:45.851 "num_base_bdevs_operational": 4, 00:15:45.851 "base_bdevs_list": [ 00:15:45.851 { 00:15:45.851 "name": "BaseBdev1", 00:15:45.851 "uuid": "b1a23ad0-50f9-5f96-bc25-8899f7ce6740", 00:15:45.851 "is_configured": true, 00:15:45.851 "data_offset": 0, 00:15:45.851 "data_size": 65536 00:15:45.851 }, 00:15:45.851 { 00:15:45.851 "name": "BaseBdev2", 00:15:45.851 "uuid": "ec50ef38-a356-5179-be3d-7ee6b95b8172", 00:15:45.851 "is_configured": true, 00:15:45.851 "data_offset": 0, 00:15:45.851 "data_size": 65536 00:15:45.851 }, 00:15:45.851 { 00:15:45.851 "name": "BaseBdev3", 00:15:45.851 "uuid": "7ff43307-f7d2-5042-b769-bf93b67ff3e6", 00:15:45.851 "is_configured": true, 00:15:45.851 "data_offset": 0, 00:15:45.851 "data_size": 65536 00:15:45.851 }, 00:15:45.851 { 00:15:45.851 "name": "BaseBdev4", 00:15:45.851 "uuid": "d93ca306-05f4-585c-bc10-f67b94f77c6f", 00:15:45.851 "is_configured": true, 00:15:45.851 "data_offset": 0, 00:15:45.851 "data_size": 65536 00:15:45.851 } 00:15:45.851 ] 00:15:45.851 }' 00:15:45.851 20:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.851 20:12:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.110 20:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:46.110 20:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:46.110 20:12:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.110 20:12:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.110 [2024-10-17 20:12:31.746677] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:46.368 20:12:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.368 20:12:31 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:15:46.368 20:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.368 20:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:46.368 20:12:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.368 20:12:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.368 20:12:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.368 20:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:46.368 20:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:46.368 20:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:46.368 20:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:46.368 20:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:46.368 20:12:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:46.368 20:12:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:46.368 20:12:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:46.368 20:12:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:46.368 20:12:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:46.368 20:12:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:46.368 20:12:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:46.368 20:12:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:46.368 20:12:31 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:46.627 [2024-10-17 20:12:32.126441] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:46.627 /dev/nbd0 00:15:46.627 20:12:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:46.627 20:12:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:46.627 20:12:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:46.627 20:12:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:15:46.627 20:12:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:46.627 20:12:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:46.627 20:12:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:46.627 20:12:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:15:46.627 20:12:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:46.627 20:12:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:46.627 20:12:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:46.627 1+0 records in 00:15:46.627 1+0 records out 00:15:46.627 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027548 s, 14.9 MB/s 00:15:46.627 20:12:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:46.627 20:12:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:15:46.627 20:12:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:15:46.627 20:12:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:46.627 20:12:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:15:46.627 20:12:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:46.627 20:12:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:46.627 20:12:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:15:46.627 20:12:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:15:46.627 20:12:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:15:54.785 65536+0 records in 00:15:54.785 65536+0 records out 00:15:54.785 33554432 bytes (34 MB, 32 MiB) copied, 8.19718 s, 4.1 MB/s 00:15:54.785 20:12:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:54.785 20:12:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:54.785 20:12:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:54.785 20:12:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:54.785 20:12:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:54.785 20:12:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:54.785 20:12:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:55.127 [2024-10-17 20:12:40.670885] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:55.127 20:12:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:55.127 20:12:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:55.127 
20:12:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:55.127 20:12:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:55.127 20:12:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:55.127 20:12:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:55.127 20:12:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:55.127 20:12:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:55.127 20:12:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:55.127 20:12:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.127 20:12:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.128 [2024-10-17 20:12:40.704113] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:55.128 20:12:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.128 20:12:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:55.128 20:12:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:55.128 20:12:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:55.128 20:12:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:55.128 20:12:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:55.128 20:12:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:55.128 20:12:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.128 20:12:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.128 20:12:40 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.128 20:12:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.128 20:12:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.128 20:12:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.128 20:12:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.128 20:12:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.128 20:12:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.416 20:12:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.416 "name": "raid_bdev1", 00:15:55.417 "uuid": "670da708-ac44-419f-9c67-9e7c1650cf82", 00:15:55.417 "strip_size_kb": 0, 00:15:55.417 "state": "online", 00:15:55.417 "raid_level": "raid1", 00:15:55.417 "superblock": false, 00:15:55.417 "num_base_bdevs": 4, 00:15:55.417 "num_base_bdevs_discovered": 3, 00:15:55.417 "num_base_bdevs_operational": 3, 00:15:55.417 "base_bdevs_list": [ 00:15:55.417 { 00:15:55.417 "name": null, 00:15:55.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.417 "is_configured": false, 00:15:55.417 "data_offset": 0, 00:15:55.417 "data_size": 65536 00:15:55.417 }, 00:15:55.417 { 00:15:55.417 "name": "BaseBdev2", 00:15:55.417 "uuid": "ec50ef38-a356-5179-be3d-7ee6b95b8172", 00:15:55.417 "is_configured": true, 00:15:55.417 "data_offset": 0, 00:15:55.417 "data_size": 65536 00:15:55.417 }, 00:15:55.417 { 00:15:55.417 "name": "BaseBdev3", 00:15:55.417 "uuid": "7ff43307-f7d2-5042-b769-bf93b67ff3e6", 00:15:55.417 "is_configured": true, 00:15:55.417 "data_offset": 0, 00:15:55.417 "data_size": 65536 00:15:55.417 }, 00:15:55.417 { 00:15:55.417 "name": "BaseBdev4", 00:15:55.417 "uuid": "d93ca306-05f4-585c-bc10-f67b94f77c6f", 00:15:55.417 
"is_configured": true, 00:15:55.417 "data_offset": 0, 00:15:55.417 "data_size": 65536 00:15:55.417 } 00:15:55.417 ] 00:15:55.417 }' 00:15:55.417 20:12:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.417 20:12:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.676 20:12:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:55.676 20:12:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.676 20:12:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.676 [2024-10-17 20:12:41.200229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:55.676 [2024-10-17 20:12:41.214560] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:15:55.676 20:12:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.676 20:12:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:55.676 [2024-10-17 20:12:41.217594] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:56.612 20:12:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:56.612 20:12:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:56.612 20:12:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:56.612 20:12:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:56.612 20:12:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:56.612 20:12:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.612 20:12:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:15:56.612 20:12:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.612 20:12:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.612 20:12:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.871 20:12:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:56.871 "name": "raid_bdev1", 00:15:56.871 "uuid": "670da708-ac44-419f-9c67-9e7c1650cf82", 00:15:56.871 "strip_size_kb": 0, 00:15:56.871 "state": "online", 00:15:56.871 "raid_level": "raid1", 00:15:56.871 "superblock": false, 00:15:56.871 "num_base_bdevs": 4, 00:15:56.871 "num_base_bdevs_discovered": 4, 00:15:56.871 "num_base_bdevs_operational": 4, 00:15:56.871 "process": { 00:15:56.871 "type": "rebuild", 00:15:56.871 "target": "spare", 00:15:56.871 "progress": { 00:15:56.871 "blocks": 20480, 00:15:56.871 "percent": 31 00:15:56.871 } 00:15:56.871 }, 00:15:56.871 "base_bdevs_list": [ 00:15:56.871 { 00:15:56.871 "name": "spare", 00:15:56.871 "uuid": "140133f7-1838-5a98-a73b-619337779487", 00:15:56.871 "is_configured": true, 00:15:56.871 "data_offset": 0, 00:15:56.871 "data_size": 65536 00:15:56.871 }, 00:15:56.871 { 00:15:56.871 "name": "BaseBdev2", 00:15:56.871 "uuid": "ec50ef38-a356-5179-be3d-7ee6b95b8172", 00:15:56.871 "is_configured": true, 00:15:56.871 "data_offset": 0, 00:15:56.871 "data_size": 65536 00:15:56.871 }, 00:15:56.871 { 00:15:56.871 "name": "BaseBdev3", 00:15:56.871 "uuid": "7ff43307-f7d2-5042-b769-bf93b67ff3e6", 00:15:56.871 "is_configured": true, 00:15:56.871 "data_offset": 0, 00:15:56.871 "data_size": 65536 00:15:56.871 }, 00:15:56.871 { 00:15:56.871 "name": "BaseBdev4", 00:15:56.871 "uuid": "d93ca306-05f4-585c-bc10-f67b94f77c6f", 00:15:56.871 "is_configured": true, 00:15:56.871 "data_offset": 0, 00:15:56.871 "data_size": 65536 00:15:56.871 } 00:15:56.871 ] 00:15:56.871 }' 00:15:56.871 20:12:42 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:56.871 20:12:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:56.871 20:12:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:56.871 20:12:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:56.871 20:12:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:56.871 20:12:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.871 20:12:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.871 [2024-10-17 20:12:42.390807] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:56.871 [2024-10-17 20:12:42.426759] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:56.871 [2024-10-17 20:12:42.426865] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:56.871 [2024-10-17 20:12:42.426891] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:56.871 [2024-10-17 20:12:42.426905] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:56.871 20:12:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.871 20:12:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:56.871 20:12:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:56.871 20:12:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:56.871 20:12:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:56.871 20:12:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:15:56.871 20:12:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:56.871 20:12:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.871 20:12:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.871 20:12:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.871 20:12:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.871 20:12:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.871 20:12:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.871 20:12:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.871 20:12:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.871 20:12:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.871 20:12:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.871 "name": "raid_bdev1", 00:15:56.871 "uuid": "670da708-ac44-419f-9c67-9e7c1650cf82", 00:15:56.871 "strip_size_kb": 0, 00:15:56.871 "state": "online", 00:15:56.871 "raid_level": "raid1", 00:15:56.871 "superblock": false, 00:15:56.871 "num_base_bdevs": 4, 00:15:56.871 "num_base_bdevs_discovered": 3, 00:15:56.871 "num_base_bdevs_operational": 3, 00:15:56.871 "base_bdevs_list": [ 00:15:56.871 { 00:15:56.871 "name": null, 00:15:56.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.871 "is_configured": false, 00:15:56.871 "data_offset": 0, 00:15:56.871 "data_size": 65536 00:15:56.871 }, 00:15:56.871 { 00:15:56.871 "name": "BaseBdev2", 00:15:56.871 "uuid": "ec50ef38-a356-5179-be3d-7ee6b95b8172", 00:15:56.871 "is_configured": true, 00:15:56.871 "data_offset": 0, 00:15:56.871 "data_size": 65536 00:15:56.871 }, 00:15:56.871 { 
00:15:56.871 "name": "BaseBdev3", 00:15:56.871 "uuid": "7ff43307-f7d2-5042-b769-bf93b67ff3e6", 00:15:56.871 "is_configured": true, 00:15:56.871 "data_offset": 0, 00:15:56.871 "data_size": 65536 00:15:56.871 }, 00:15:56.871 { 00:15:56.871 "name": "BaseBdev4", 00:15:56.871 "uuid": "d93ca306-05f4-585c-bc10-f67b94f77c6f", 00:15:56.871 "is_configured": true, 00:15:56.871 "data_offset": 0, 00:15:56.871 "data_size": 65536 00:15:56.871 } 00:15:56.871 ] 00:15:56.871 }' 00:15:56.871 20:12:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.871 20:12:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.438 20:12:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:57.438 20:12:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:57.438 20:12:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:57.438 20:12:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:57.438 20:12:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:57.438 20:12:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.438 20:12:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.438 20:12:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.438 20:12:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.438 20:12:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.438 20:12:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:57.438 "name": "raid_bdev1", 00:15:57.438 "uuid": "670da708-ac44-419f-9c67-9e7c1650cf82", 00:15:57.438 "strip_size_kb": 0, 00:15:57.438 "state": "online", 
00:15:57.438 "raid_level": "raid1", 00:15:57.438 "superblock": false, 00:15:57.438 "num_base_bdevs": 4, 00:15:57.438 "num_base_bdevs_discovered": 3, 00:15:57.438 "num_base_bdevs_operational": 3, 00:15:57.438 "base_bdevs_list": [ 00:15:57.438 { 00:15:57.438 "name": null, 00:15:57.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.438 "is_configured": false, 00:15:57.438 "data_offset": 0, 00:15:57.438 "data_size": 65536 00:15:57.438 }, 00:15:57.438 { 00:15:57.438 "name": "BaseBdev2", 00:15:57.438 "uuid": "ec50ef38-a356-5179-be3d-7ee6b95b8172", 00:15:57.438 "is_configured": true, 00:15:57.438 "data_offset": 0, 00:15:57.438 "data_size": 65536 00:15:57.438 }, 00:15:57.438 { 00:15:57.438 "name": "BaseBdev3", 00:15:57.438 "uuid": "7ff43307-f7d2-5042-b769-bf93b67ff3e6", 00:15:57.438 "is_configured": true, 00:15:57.438 "data_offset": 0, 00:15:57.438 "data_size": 65536 00:15:57.438 }, 00:15:57.438 { 00:15:57.438 "name": "BaseBdev4", 00:15:57.438 "uuid": "d93ca306-05f4-585c-bc10-f67b94f77c6f", 00:15:57.438 "is_configured": true, 00:15:57.438 "data_offset": 0, 00:15:57.438 "data_size": 65536 00:15:57.438 } 00:15:57.438 ] 00:15:57.438 }' 00:15:57.438 20:12:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:57.438 20:12:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:57.438 20:12:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:57.438 20:12:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:57.438 20:12:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:57.438 20:12:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.438 20:12:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.697 [2024-10-17 20:12:43.094796] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:57.697 [2024-10-17 20:12:43.108754] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:15:57.697 20:12:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.697 20:12:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:57.697 [2024-10-17 20:12:43.111287] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:58.637 20:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:58.637 20:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:58.637 20:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:58.637 20:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:58.637 20:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:58.637 20:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.637 20:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.637 20:12:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.637 20:12:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.637 20:12:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.637 20:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:58.637 "name": "raid_bdev1", 00:15:58.637 "uuid": "670da708-ac44-419f-9c67-9e7c1650cf82", 00:15:58.637 "strip_size_kb": 0, 00:15:58.637 "state": "online", 00:15:58.637 "raid_level": "raid1", 00:15:58.637 "superblock": false, 00:15:58.637 "num_base_bdevs": 4, 00:15:58.637 
"num_base_bdevs_discovered": 4, 00:15:58.637 "num_base_bdevs_operational": 4, 00:15:58.637 "process": { 00:15:58.637 "type": "rebuild", 00:15:58.637 "target": "spare", 00:15:58.637 "progress": { 00:15:58.637 "blocks": 20480, 00:15:58.637 "percent": 31 00:15:58.637 } 00:15:58.637 }, 00:15:58.637 "base_bdevs_list": [ 00:15:58.637 { 00:15:58.637 "name": "spare", 00:15:58.637 "uuid": "140133f7-1838-5a98-a73b-619337779487", 00:15:58.637 "is_configured": true, 00:15:58.637 "data_offset": 0, 00:15:58.637 "data_size": 65536 00:15:58.637 }, 00:15:58.637 { 00:15:58.637 "name": "BaseBdev2", 00:15:58.637 "uuid": "ec50ef38-a356-5179-be3d-7ee6b95b8172", 00:15:58.637 "is_configured": true, 00:15:58.637 "data_offset": 0, 00:15:58.637 "data_size": 65536 00:15:58.637 }, 00:15:58.637 { 00:15:58.637 "name": "BaseBdev3", 00:15:58.637 "uuid": "7ff43307-f7d2-5042-b769-bf93b67ff3e6", 00:15:58.637 "is_configured": true, 00:15:58.637 "data_offset": 0, 00:15:58.637 "data_size": 65536 00:15:58.637 }, 00:15:58.637 { 00:15:58.637 "name": "BaseBdev4", 00:15:58.637 "uuid": "d93ca306-05f4-585c-bc10-f67b94f77c6f", 00:15:58.637 "is_configured": true, 00:15:58.637 "data_offset": 0, 00:15:58.637 "data_size": 65536 00:15:58.637 } 00:15:58.637 ] 00:15:58.637 }' 00:15:58.637 20:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:58.637 20:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:58.637 20:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:58.638 20:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:58.638 20:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:58.638 20:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:58.638 20:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = 
raid1 ']' 00:15:58.638 20:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:15:58.638 20:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:58.638 20:12:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.638 20:12:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.638 [2024-10-17 20:12:44.284552] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:58.897 [2024-10-17 20:12:44.320458] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:15:58.897 20:12:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.897 20:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:15:58.897 20:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:15:58.897 20:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:58.897 20:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:58.897 20:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:58.897 20:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:58.897 20:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:58.897 20:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.897 20:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.897 20:12:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.897 20:12:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.897 20:12:44 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.897 20:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:58.897 "name": "raid_bdev1", 00:15:58.897 "uuid": "670da708-ac44-419f-9c67-9e7c1650cf82", 00:15:58.897 "strip_size_kb": 0, 00:15:58.897 "state": "online", 00:15:58.897 "raid_level": "raid1", 00:15:58.897 "superblock": false, 00:15:58.897 "num_base_bdevs": 4, 00:15:58.897 "num_base_bdevs_discovered": 3, 00:15:58.897 "num_base_bdevs_operational": 3, 00:15:58.897 "process": { 00:15:58.897 "type": "rebuild", 00:15:58.897 "target": "spare", 00:15:58.897 "progress": { 00:15:58.897 "blocks": 24576, 00:15:58.897 "percent": 37 00:15:58.897 } 00:15:58.897 }, 00:15:58.897 "base_bdevs_list": [ 00:15:58.897 { 00:15:58.897 "name": "spare", 00:15:58.897 "uuid": "140133f7-1838-5a98-a73b-619337779487", 00:15:58.897 "is_configured": true, 00:15:58.897 "data_offset": 0, 00:15:58.897 "data_size": 65536 00:15:58.897 }, 00:15:58.897 { 00:15:58.897 "name": null, 00:15:58.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.897 "is_configured": false, 00:15:58.897 "data_offset": 0, 00:15:58.897 "data_size": 65536 00:15:58.897 }, 00:15:58.897 { 00:15:58.897 "name": "BaseBdev3", 00:15:58.897 "uuid": "7ff43307-f7d2-5042-b769-bf93b67ff3e6", 00:15:58.897 "is_configured": true, 00:15:58.897 "data_offset": 0, 00:15:58.897 "data_size": 65536 00:15:58.897 }, 00:15:58.897 { 00:15:58.897 "name": "BaseBdev4", 00:15:58.897 "uuid": "d93ca306-05f4-585c-bc10-f67b94f77c6f", 00:15:58.897 "is_configured": true, 00:15:58.897 "data_offset": 0, 00:15:58.897 "data_size": 65536 00:15:58.897 } 00:15:58.897 ] 00:15:58.897 }' 00:15:58.897 20:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:58.897 20:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:58.897 20:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:15:58.897 20:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:58.897 20:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=479 00:15:58.897 20:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:58.897 20:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:58.897 20:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:58.897 20:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:58.897 20:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:58.897 20:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:58.897 20:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.897 20:12:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.897 20:12:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.897 20:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.897 20:12:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.897 20:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:58.897 "name": "raid_bdev1", 00:15:58.897 "uuid": "670da708-ac44-419f-9c67-9e7c1650cf82", 00:15:58.897 "strip_size_kb": 0, 00:15:58.897 "state": "online", 00:15:58.897 "raid_level": "raid1", 00:15:58.897 "superblock": false, 00:15:58.897 "num_base_bdevs": 4, 00:15:58.897 "num_base_bdevs_discovered": 3, 00:15:58.897 "num_base_bdevs_operational": 3, 00:15:58.897 "process": { 00:15:58.897 "type": "rebuild", 00:15:58.897 "target": "spare", 00:15:58.897 "progress": { 
00:15:58.897 "blocks": 26624, 00:15:58.897 "percent": 40 00:15:58.897 } 00:15:58.897 }, 00:15:58.897 "base_bdevs_list": [ 00:15:58.897 { 00:15:58.897 "name": "spare", 00:15:58.897 "uuid": "140133f7-1838-5a98-a73b-619337779487", 00:15:58.897 "is_configured": true, 00:15:58.897 "data_offset": 0, 00:15:58.897 "data_size": 65536 00:15:58.897 }, 00:15:58.898 { 00:15:58.898 "name": null, 00:15:58.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.898 "is_configured": false, 00:15:58.898 "data_offset": 0, 00:15:58.898 "data_size": 65536 00:15:58.898 }, 00:15:58.898 { 00:15:58.898 "name": "BaseBdev3", 00:15:58.898 "uuid": "7ff43307-f7d2-5042-b769-bf93b67ff3e6", 00:15:58.898 "is_configured": true, 00:15:58.898 "data_offset": 0, 00:15:58.898 "data_size": 65536 00:15:58.898 }, 00:15:58.898 { 00:15:58.898 "name": "BaseBdev4", 00:15:58.898 "uuid": "d93ca306-05f4-585c-bc10-f67b94f77c6f", 00:15:58.898 "is_configured": true, 00:15:58.898 "data_offset": 0, 00:15:58.898 "data_size": 65536 00:15:58.898 } 00:15:58.898 ] 00:15:58.898 }' 00:15:58.898 20:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:59.156 20:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:59.156 20:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:59.156 20:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:59.156 20:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:00.091 20:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:00.091 20:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:00.091 20:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:00.091 20:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:16:00.091 20:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:00.091 20:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:00.091 20:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.091 20:12:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.091 20:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.091 20:12:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.091 20:12:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.091 20:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:00.091 "name": "raid_bdev1", 00:16:00.091 "uuid": "670da708-ac44-419f-9c67-9e7c1650cf82", 00:16:00.091 "strip_size_kb": 0, 00:16:00.091 "state": "online", 00:16:00.091 "raid_level": "raid1", 00:16:00.091 "superblock": false, 00:16:00.091 "num_base_bdevs": 4, 00:16:00.091 "num_base_bdevs_discovered": 3, 00:16:00.091 "num_base_bdevs_operational": 3, 00:16:00.091 "process": { 00:16:00.091 "type": "rebuild", 00:16:00.091 "target": "spare", 00:16:00.091 "progress": { 00:16:00.091 "blocks": 51200, 00:16:00.091 "percent": 78 00:16:00.091 } 00:16:00.091 }, 00:16:00.091 "base_bdevs_list": [ 00:16:00.091 { 00:16:00.091 "name": "spare", 00:16:00.091 "uuid": "140133f7-1838-5a98-a73b-619337779487", 00:16:00.091 "is_configured": true, 00:16:00.091 "data_offset": 0, 00:16:00.091 "data_size": 65536 00:16:00.091 }, 00:16:00.091 { 00:16:00.091 "name": null, 00:16:00.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.091 "is_configured": false, 00:16:00.091 "data_offset": 0, 00:16:00.091 "data_size": 65536 00:16:00.091 }, 00:16:00.091 { 00:16:00.091 "name": "BaseBdev3", 00:16:00.091 "uuid": 
"7ff43307-f7d2-5042-b769-bf93b67ff3e6", 00:16:00.091 "is_configured": true, 00:16:00.091 "data_offset": 0, 00:16:00.091 "data_size": 65536 00:16:00.091 }, 00:16:00.091 { 00:16:00.091 "name": "BaseBdev4", 00:16:00.091 "uuid": "d93ca306-05f4-585c-bc10-f67b94f77c6f", 00:16:00.091 "is_configured": true, 00:16:00.091 "data_offset": 0, 00:16:00.091 "data_size": 65536 00:16:00.091 } 00:16:00.091 ] 00:16:00.091 }' 00:16:00.091 20:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:00.350 20:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:00.350 20:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:00.350 20:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:00.350 20:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:00.917 [2024-10-17 20:12:46.335045] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:00.917 [2024-10-17 20:12:46.335422] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:00.917 [2024-10-17 20:12:46.335610] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:01.176 20:12:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:01.176 20:12:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:01.176 20:12:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:01.176 20:12:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:01.176 20:12:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:01.176 20:12:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:01.176 20:12:46 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.176 20:12:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.176 20:12:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.176 20:12:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.176 20:12:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.435 20:12:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:01.435 "name": "raid_bdev1", 00:16:01.435 "uuid": "670da708-ac44-419f-9c67-9e7c1650cf82", 00:16:01.435 "strip_size_kb": 0, 00:16:01.435 "state": "online", 00:16:01.435 "raid_level": "raid1", 00:16:01.435 "superblock": false, 00:16:01.435 "num_base_bdevs": 4, 00:16:01.435 "num_base_bdevs_discovered": 3, 00:16:01.435 "num_base_bdevs_operational": 3, 00:16:01.435 "base_bdevs_list": [ 00:16:01.435 { 00:16:01.435 "name": "spare", 00:16:01.435 "uuid": "140133f7-1838-5a98-a73b-619337779487", 00:16:01.435 "is_configured": true, 00:16:01.435 "data_offset": 0, 00:16:01.435 "data_size": 65536 00:16:01.435 }, 00:16:01.435 { 00:16:01.435 "name": null, 00:16:01.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.435 "is_configured": false, 00:16:01.435 "data_offset": 0, 00:16:01.435 "data_size": 65536 00:16:01.435 }, 00:16:01.435 { 00:16:01.435 "name": "BaseBdev3", 00:16:01.435 "uuid": "7ff43307-f7d2-5042-b769-bf93b67ff3e6", 00:16:01.435 "is_configured": true, 00:16:01.435 "data_offset": 0, 00:16:01.435 "data_size": 65536 00:16:01.435 }, 00:16:01.435 { 00:16:01.435 "name": "BaseBdev4", 00:16:01.435 "uuid": "d93ca306-05f4-585c-bc10-f67b94f77c6f", 00:16:01.435 "is_configured": true, 00:16:01.435 "data_offset": 0, 00:16:01.435 "data_size": 65536 00:16:01.435 } 00:16:01.435 ] 00:16:01.435 }' 00:16:01.435 20:12:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:16:01.435 20:12:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:01.435 20:12:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:01.435 20:12:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:01.435 20:12:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:16:01.435 20:12:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:01.435 20:12:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:01.435 20:12:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:01.435 20:12:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:01.435 20:12:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:01.435 20:12:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.435 20:12:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.435 20:12:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.435 20:12:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.435 20:12:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.435 20:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:01.435 "name": "raid_bdev1", 00:16:01.435 "uuid": "670da708-ac44-419f-9c67-9e7c1650cf82", 00:16:01.435 "strip_size_kb": 0, 00:16:01.435 "state": "online", 00:16:01.435 "raid_level": "raid1", 00:16:01.435 "superblock": false, 00:16:01.435 "num_base_bdevs": 4, 00:16:01.435 "num_base_bdevs_discovered": 3, 00:16:01.435 "num_base_bdevs_operational": 3, 00:16:01.435 
"base_bdevs_list": [ 00:16:01.435 { 00:16:01.435 "name": "spare", 00:16:01.435 "uuid": "140133f7-1838-5a98-a73b-619337779487", 00:16:01.435 "is_configured": true, 00:16:01.435 "data_offset": 0, 00:16:01.435 "data_size": 65536 00:16:01.435 }, 00:16:01.435 { 00:16:01.435 "name": null, 00:16:01.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.435 "is_configured": false, 00:16:01.435 "data_offset": 0, 00:16:01.435 "data_size": 65536 00:16:01.435 }, 00:16:01.435 { 00:16:01.435 "name": "BaseBdev3", 00:16:01.435 "uuid": "7ff43307-f7d2-5042-b769-bf93b67ff3e6", 00:16:01.435 "is_configured": true, 00:16:01.435 "data_offset": 0, 00:16:01.435 "data_size": 65536 00:16:01.435 }, 00:16:01.435 { 00:16:01.435 "name": "BaseBdev4", 00:16:01.435 "uuid": "d93ca306-05f4-585c-bc10-f67b94f77c6f", 00:16:01.435 "is_configured": true, 00:16:01.435 "data_offset": 0, 00:16:01.435 "data_size": 65536 00:16:01.435 } 00:16:01.435 ] 00:16:01.435 }' 00:16:01.435 20:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:01.694 20:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:01.694 20:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:01.694 20:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:01.694 20:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:01.694 20:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:01.694 20:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:01.694 20:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:01.694 20:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:01.694 20:12:47 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:01.694 20:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.694 20:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.694 20:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.694 20:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.694 20:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.694 20:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.694 20:12:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.694 20:12:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.694 20:12:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.694 20:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.694 "name": "raid_bdev1", 00:16:01.694 "uuid": "670da708-ac44-419f-9c67-9e7c1650cf82", 00:16:01.694 "strip_size_kb": 0, 00:16:01.694 "state": "online", 00:16:01.694 "raid_level": "raid1", 00:16:01.694 "superblock": false, 00:16:01.694 "num_base_bdevs": 4, 00:16:01.694 "num_base_bdevs_discovered": 3, 00:16:01.694 "num_base_bdevs_operational": 3, 00:16:01.694 "base_bdevs_list": [ 00:16:01.694 { 00:16:01.694 "name": "spare", 00:16:01.695 "uuid": "140133f7-1838-5a98-a73b-619337779487", 00:16:01.695 "is_configured": true, 00:16:01.695 "data_offset": 0, 00:16:01.695 "data_size": 65536 00:16:01.695 }, 00:16:01.695 { 00:16:01.695 "name": null, 00:16:01.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.695 "is_configured": false, 00:16:01.695 "data_offset": 0, 00:16:01.695 "data_size": 65536 00:16:01.695 }, 00:16:01.695 { 00:16:01.695 "name": "BaseBdev3", 00:16:01.695 "uuid": 
"7ff43307-f7d2-5042-b769-bf93b67ff3e6", 00:16:01.695 "is_configured": true, 00:16:01.695 "data_offset": 0, 00:16:01.695 "data_size": 65536 00:16:01.695 }, 00:16:01.695 { 00:16:01.695 "name": "BaseBdev4", 00:16:01.695 "uuid": "d93ca306-05f4-585c-bc10-f67b94f77c6f", 00:16:01.695 "is_configured": true, 00:16:01.695 "data_offset": 0, 00:16:01.695 "data_size": 65536 00:16:01.695 } 00:16:01.695 ] 00:16:01.695 }' 00:16:01.695 20:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.695 20:12:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.271 20:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:02.271 20:12:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.271 20:12:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.271 [2024-10-17 20:12:47.691901] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:02.271 [2024-10-17 20:12:47.691942] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:02.271 [2024-10-17 20:12:47.692043] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:02.271 [2024-10-17 20:12:47.692335] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:02.271 [2024-10-17 20:12:47.692416] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:02.271 20:12:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.271 20:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:16:02.271 20:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.271 20:12:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:02.271 20:12:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.271 20:12:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.271 20:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:02.271 20:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:02.271 20:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:02.271 20:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:02.271 20:12:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:02.271 20:12:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:02.271 20:12:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:02.271 20:12:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:02.271 20:12:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:02.271 20:12:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:02.271 20:12:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:02.271 20:12:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:02.271 20:12:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:02.540 /dev/nbd0 00:16:02.540 20:12:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:02.540 20:12:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:02.540 20:12:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:02.540 20:12:48 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:16:02.540 20:12:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:02.540 20:12:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:02.540 20:12:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:02.540 20:12:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:16:02.540 20:12:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:02.540 20:12:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:02.540 20:12:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:02.540 1+0 records in 00:16:02.540 1+0 records out 00:16:02.540 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000689637 s, 5.9 MB/s 00:16:02.540 20:12:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:02.540 20:12:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:16:02.540 20:12:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:02.540 20:12:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:02.540 20:12:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:16:02.540 20:12:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:02.540 20:12:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:02.540 20:12:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:02.799 /dev/nbd1 00:16:02.799 
20:12:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:02.799 20:12:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:02.799 20:12:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:16:02.799 20:12:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:16:02.799 20:12:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:02.799 20:12:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:02.799 20:12:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:16:02.799 20:12:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:16:02.799 20:12:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:02.799 20:12:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:02.799 20:12:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:02.799 1+0 records in 00:16:02.799 1+0 records out 00:16:02.799 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00045443 s, 9.0 MB/s 00:16:02.799 20:12:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:02.799 20:12:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:16:02.799 20:12:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:02.799 20:12:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:02.799 20:12:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:16:02.799 20:12:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:16:02.799 20:12:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:02.799 20:12:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:03.057 20:12:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:03.057 20:12:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:03.057 20:12:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:03.057 20:12:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:03.057 20:12:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:03.057 20:12:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:03.057 20:12:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:03.316 20:12:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:03.316 20:12:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:03.316 20:12:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:03.316 20:12:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:03.316 20:12:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:03.316 20:12:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:03.316 20:12:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:03.316 20:12:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:03.316 20:12:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:03.316 20:12:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:03.884 20:12:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:03.884 20:12:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:03.884 20:12:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:03.884 20:12:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:03.884 20:12:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:03.884 20:12:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:03.884 20:12:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:03.884 20:12:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:03.884 20:12:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:03.884 20:12:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77696 00:16:03.884 20:12:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 77696 ']' 00:16:03.884 20:12:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 77696 00:16:03.884 20:12:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:16:03.884 20:12:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:03.884 20:12:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77696 00:16:03.884 20:12:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:03.884 20:12:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:03.884 killing process with pid 77696 00:16:03.884 20:12:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77696' 00:16:03.884 
20:12:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 77696 00:16:03.884 Received shutdown signal, test time was about 60.000000 seconds 00:16:03.884 00:16:03.884 Latency(us) 00:16:03.884 [2024-10-17T20:12:49.538Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:03.884 [2024-10-17T20:12:49.538Z] =================================================================================================================== 00:16:03.884 [2024-10-17T20:12:49.538Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:03.884 [2024-10-17 20:12:49.326838] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:03.884 20:12:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 77696 00:16:04.143 [2024-10-17 20:12:49.737743] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:05.079 20:12:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:16:05.079 00:16:05.079 real 0m20.842s 00:16:05.079 user 0m23.503s 00:16:05.079 sys 0m3.499s 00:16:05.079 20:12:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:05.079 ************************************ 00:16:05.079 END TEST raid_rebuild_test 00:16:05.079 ************************************ 00:16:05.079 20:12:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.337 20:12:50 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:16:05.337 20:12:50 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:16:05.337 20:12:50 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:05.338 20:12:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:05.338 ************************************ 00:16:05.338 START TEST raid_rebuild_test_sb 00:16:05.338 ************************************ 00:16:05.338 20:12:50 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true false true 00:16:05.338 20:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:05.338 20:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:05.338 20:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:05.338 20:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:05.338 20:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:05.338 20:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:05.338 20:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:05.338 20:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:05.338 20:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:05.338 20:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:05.338 20:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:05.338 20:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:05.338 20:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:05.338 20:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:05.338 20:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:05.338 20:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:05.338 20:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:05.338 20:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:05.338 20:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:16:05.338 20:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:05.338 20:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:05.338 20:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:05.338 20:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:05.338 20:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:05.338 20:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:05.338 20:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:05.338 20:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:05.338 20:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:05.338 20:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:05.338 20:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:05.338 20:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78170 00:16:05.338 20:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 78170 00:16:05.338 20:12:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 78170 ']' 00:16:05.338 20:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:05.338 20:12:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:05.338 20:12:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:05.338 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:16:05.338 20:12:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:05.338 20:12:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:05.338 20:12:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.338 [2024-10-17 20:12:50.901354] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 00:16:05.338 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:05.338 Zero copy mechanism will not be used. 00:16:05.338 [2024-10-17 20:12:50.901607] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78170 ] 00:16:05.596 [2024-10-17 20:12:51.073451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:05.596 [2024-10-17 20:12:51.189520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:05.855 [2024-10-17 20:12:51.390369] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:05.855 [2024-10-17 20:12:51.390414] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:06.423 20:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:06.423 20:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:16:06.423 20:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:06.423 20:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:06.423 20:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:06.423 20:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.423 BaseBdev1_malloc 00:16:06.423 20:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.423 20:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:06.423 20:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.423 20:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.423 [2024-10-17 20:12:51.878860] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:06.423 [2024-10-17 20:12:51.878971] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:06.423 [2024-10-17 20:12:51.879004] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:06.423 [2024-10-17 20:12:51.879037] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:06.423 [2024-10-17 20:12:51.881877] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:06.423 [2024-10-17 20:12:51.881957] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:06.423 BaseBdev1 00:16:06.423 20:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.423 20:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:06.424 20:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:06.424 20:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.424 20:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.424 BaseBdev2_malloc 00:16:06.424 20:12:51 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.424 20:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:06.424 20:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.424 20:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.424 [2024-10-17 20:12:51.930416] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:06.424 [2024-10-17 20:12:51.930534] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:06.424 [2024-10-17 20:12:51.930560] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:06.424 [2024-10-17 20:12:51.930577] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:06.424 [2024-10-17 20:12:51.933518] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:06.424 [2024-10-17 20:12:51.933593] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:06.424 BaseBdev2 00:16:06.424 20:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.424 20:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:06.424 20:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:06.424 20:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.424 20:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.424 BaseBdev3_malloc 00:16:06.424 20:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.424 20:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 
00:16:06.424 20:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.424 20:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.424 [2024-10-17 20:12:51.991718] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:06.424 [2024-10-17 20:12:51.991813] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:06.424 [2024-10-17 20:12:51.991843] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:06.424 [2024-10-17 20:12:51.991860] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:06.424 [2024-10-17 20:12:51.994651] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:06.424 [2024-10-17 20:12:51.994728] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:06.424 BaseBdev3 00:16:06.424 20:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.424 20:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:06.424 20:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:06.424 20:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.424 20:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.424 BaseBdev4_malloc 00:16:06.424 20:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.424 20:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:06.424 20:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.424 20:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:16:06.424 [2024-10-17 20:12:52.042101] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:06.424 [2024-10-17 20:12:52.042210] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:06.424 [2024-10-17 20:12:52.042246] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:06.424 [2024-10-17 20:12:52.042264] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:06.424 [2024-10-17 20:12:52.045081] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:06.424 [2024-10-17 20:12:52.045160] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:06.424 BaseBdev4 00:16:06.424 20:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.424 20:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:06.424 20:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.424 20:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.683 spare_malloc 00:16:06.683 20:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.683 20:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:06.683 20:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.683 20:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.683 spare_delay 00:16:06.683 20:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.683 20:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:06.683 20:12:52 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.683 20:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.683 [2024-10-17 20:12:52.100791] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:06.683 [2024-10-17 20:12:52.100878] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:06.683 [2024-10-17 20:12:52.100907] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:06.683 [2024-10-17 20:12:52.100923] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:06.683 [2024-10-17 20:12:52.103844] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:06.683 [2024-10-17 20:12:52.103891] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:06.683 spare 00:16:06.683 20:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.683 20:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:06.683 20:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.683 20:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.683 [2024-10-17 20:12:52.108933] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:06.683 [2024-10-17 20:12:52.111426] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:06.683 [2024-10-17 20:12:52.111541] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:06.683 [2024-10-17 20:12:52.111650] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:06.683 [2024-10-17 20:12:52.111904] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:06.683 [2024-10-17 20:12:52.111928] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:06.683 [2024-10-17 20:12:52.112341] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:06.683 [2024-10-17 20:12:52.112599] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:06.683 [2024-10-17 20:12:52.112617] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:06.683 [2024-10-17 20:12:52.112852] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:06.683 20:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.683 20:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:06.683 20:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:06.683 20:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:06.683 20:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:06.683 20:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:06.683 20:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:06.683 20:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.683 20:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.683 20:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.683 20:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.683 20:12:52 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.683 20:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.683 20:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.683 20:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.683 20:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.683 20:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.683 "name": "raid_bdev1", 00:16:06.683 "uuid": "b038b3e7-b850-4e48-a616-9fac6675b405", 00:16:06.683 "strip_size_kb": 0, 00:16:06.683 "state": "online", 00:16:06.683 "raid_level": "raid1", 00:16:06.683 "superblock": true, 00:16:06.683 "num_base_bdevs": 4, 00:16:06.683 "num_base_bdevs_discovered": 4, 00:16:06.683 "num_base_bdevs_operational": 4, 00:16:06.683 "base_bdevs_list": [ 00:16:06.683 { 00:16:06.683 "name": "BaseBdev1", 00:16:06.683 "uuid": "9e85e4c3-0ef5-50b5-b810-666257880432", 00:16:06.683 "is_configured": true, 00:16:06.683 "data_offset": 2048, 00:16:06.683 "data_size": 63488 00:16:06.683 }, 00:16:06.683 { 00:16:06.683 "name": "BaseBdev2", 00:16:06.683 "uuid": "ef9249ab-ca1d-53db-97c0-65eff44f3604", 00:16:06.683 "is_configured": true, 00:16:06.683 "data_offset": 2048, 00:16:06.683 "data_size": 63488 00:16:06.683 }, 00:16:06.683 { 00:16:06.683 "name": "BaseBdev3", 00:16:06.683 "uuid": "2e570613-fe71-50fa-b64f-50ca911afc0c", 00:16:06.683 "is_configured": true, 00:16:06.683 "data_offset": 2048, 00:16:06.683 "data_size": 63488 00:16:06.683 }, 00:16:06.683 { 00:16:06.683 "name": "BaseBdev4", 00:16:06.683 "uuid": "57e854ca-aa13-551d-a413-41791f803279", 00:16:06.683 "is_configured": true, 00:16:06.683 "data_offset": 2048, 00:16:06.683 "data_size": 63488 00:16:06.683 } 00:16:06.683 ] 00:16:06.683 }' 00:16:06.683 20:12:52 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.683 20:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.250 20:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:07.250 20:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:07.250 20:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.250 20:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.250 [2024-10-17 20:12:52.661541] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:07.250 20:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.250 20:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:16:07.250 20:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.250 20:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:07.250 20:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.250 20:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.250 20:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.250 20:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:07.250 20:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:07.250 20:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:07.250 20:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:07.250 20:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 
00:16:07.250 20:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:07.250 20:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:07.250 20:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:07.250 20:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:07.250 20:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:07.250 20:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:07.250 20:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:07.250 20:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:07.250 20:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:07.509 [2024-10-17 20:12:53.057302] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:07.509 /dev/nbd0 00:16:07.509 20:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:07.509 20:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:07.509 20:12:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:07.509 20:12:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:16:07.509 20:12:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:07.509 20:12:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:07.509 20:12:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:07.509 20:12:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:16:07.509 
20:12:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:07.509 20:12:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:07.509 20:12:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:07.509 1+0 records in 00:16:07.509 1+0 records out 00:16:07.509 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000313398 s, 13.1 MB/s 00:16:07.509 20:12:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:07.509 20:12:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:16:07.509 20:12:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:07.509 20:12:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:07.509 20:12:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:16:07.509 20:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:07.509 20:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:07.509 20:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:16:07.509 20:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:16:07.509 20:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:16:15.630 63488+0 records in 00:16:15.630 63488+0 records out 00:16:15.630 32505856 bytes (33 MB, 31 MiB) copied, 7.97187 s, 4.1 MB/s 00:16:15.630 20:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:15.631 20:13:01 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:15.631 20:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:15.631 20:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:15.631 20:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:15.631 20:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:15.631 20:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:15.890 [2024-10-17 20:13:01.366952] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:15.890 20:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:15.890 20:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:15.890 20:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:15.890 20:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:15.890 20:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:15.890 20:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:15.890 20:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:15.890 20:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:15.890 20:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:15.890 20:13:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.890 20:13:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.890 [2024-10-17 20:13:01.399064] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:15.890 
20:13:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.890 20:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:15.890 20:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:15.890 20:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:15.890 20:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:15.890 20:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:15.890 20:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:15.890 20:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:15.890 20:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:15.890 20:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:15.890 20:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:15.890 20:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.890 20:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.890 20:13:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.890 20:13:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.890 20:13:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.890 20:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:15.890 "name": "raid_bdev1", 00:16:15.890 "uuid": "b038b3e7-b850-4e48-a616-9fac6675b405", 00:16:15.890 "strip_size_kb": 0, 00:16:15.890 "state": 
"online", 00:16:15.890 "raid_level": "raid1", 00:16:15.890 "superblock": true, 00:16:15.890 "num_base_bdevs": 4, 00:16:15.890 "num_base_bdevs_discovered": 3, 00:16:15.890 "num_base_bdevs_operational": 3, 00:16:15.890 "base_bdevs_list": [ 00:16:15.890 { 00:16:15.890 "name": null, 00:16:15.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.890 "is_configured": false, 00:16:15.890 "data_offset": 0, 00:16:15.890 "data_size": 63488 00:16:15.890 }, 00:16:15.890 { 00:16:15.890 "name": "BaseBdev2", 00:16:15.890 "uuid": "ef9249ab-ca1d-53db-97c0-65eff44f3604", 00:16:15.890 "is_configured": true, 00:16:15.890 "data_offset": 2048, 00:16:15.890 "data_size": 63488 00:16:15.890 }, 00:16:15.890 { 00:16:15.890 "name": "BaseBdev3", 00:16:15.890 "uuid": "2e570613-fe71-50fa-b64f-50ca911afc0c", 00:16:15.890 "is_configured": true, 00:16:15.890 "data_offset": 2048, 00:16:15.890 "data_size": 63488 00:16:15.890 }, 00:16:15.890 { 00:16:15.890 "name": "BaseBdev4", 00:16:15.890 "uuid": "57e854ca-aa13-551d-a413-41791f803279", 00:16:15.890 "is_configured": true, 00:16:15.890 "data_offset": 2048, 00:16:15.890 "data_size": 63488 00:16:15.890 } 00:16:15.890 ] 00:16:15.890 }' 00:16:15.890 20:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:15.890 20:13:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.458 20:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:16.458 20:13:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.458 20:13:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.458 [2024-10-17 20:13:01.951377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:16.458 [2024-10-17 20:13:01.965778] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:16:16.458 20:13:01 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.458 20:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:16.458 [2024-10-17 20:13:01.968604] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:17.457 20:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:17.457 20:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:17.457 20:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:17.457 20:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:17.457 20:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:17.457 20:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.457 20:13:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.458 20:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.458 20:13:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.458 20:13:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.458 20:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:17.458 "name": "raid_bdev1", 00:16:17.458 "uuid": "b038b3e7-b850-4e48-a616-9fac6675b405", 00:16:17.458 "strip_size_kb": 0, 00:16:17.458 "state": "online", 00:16:17.458 "raid_level": "raid1", 00:16:17.458 "superblock": true, 00:16:17.458 "num_base_bdevs": 4, 00:16:17.458 "num_base_bdevs_discovered": 4, 00:16:17.458 "num_base_bdevs_operational": 4, 00:16:17.458 "process": { 00:16:17.458 "type": "rebuild", 00:16:17.458 "target": "spare", 00:16:17.458 "progress": { 00:16:17.458 "blocks": 20480, 
00:16:17.458 "percent": 32 00:16:17.458 } 00:16:17.458 }, 00:16:17.458 "base_bdevs_list": [ 00:16:17.458 { 00:16:17.458 "name": "spare", 00:16:17.458 "uuid": "cf3f758f-2496-5d0f-a19e-60f9b3cf7d83", 00:16:17.458 "is_configured": true, 00:16:17.458 "data_offset": 2048, 00:16:17.458 "data_size": 63488 00:16:17.458 }, 00:16:17.458 { 00:16:17.458 "name": "BaseBdev2", 00:16:17.458 "uuid": "ef9249ab-ca1d-53db-97c0-65eff44f3604", 00:16:17.458 "is_configured": true, 00:16:17.458 "data_offset": 2048, 00:16:17.458 "data_size": 63488 00:16:17.458 }, 00:16:17.458 { 00:16:17.458 "name": "BaseBdev3", 00:16:17.458 "uuid": "2e570613-fe71-50fa-b64f-50ca911afc0c", 00:16:17.458 "is_configured": true, 00:16:17.458 "data_offset": 2048, 00:16:17.458 "data_size": 63488 00:16:17.458 }, 00:16:17.458 { 00:16:17.458 "name": "BaseBdev4", 00:16:17.458 "uuid": "57e854ca-aa13-551d-a413-41791f803279", 00:16:17.458 "is_configured": true, 00:16:17.458 "data_offset": 2048, 00:16:17.458 "data_size": 63488 00:16:17.458 } 00:16:17.458 ] 00:16:17.458 }' 00:16:17.458 20:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:17.458 20:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:17.458 20:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:17.716 20:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:17.716 20:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:17.716 20:13:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.716 20:13:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.716 [2024-10-17 20:13:03.145750] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:17.716 [2024-10-17 20:13:03.177573] 
bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:17.716 [2024-10-17 20:13:03.177689] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:17.716 [2024-10-17 20:13:03.177716] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:17.716 [2024-10-17 20:13:03.177731] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:17.716 20:13:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.716 20:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:17.716 20:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:17.716 20:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:17.716 20:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:17.716 20:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:17.716 20:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:17.716 20:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:17.716 20:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:17.716 20:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:17.716 20:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:17.716 20:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.716 20:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.716 20:13:03 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.716 20:13:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.716 20:13:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.716 20:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:17.716 "name": "raid_bdev1", 00:16:17.716 "uuid": "b038b3e7-b850-4e48-a616-9fac6675b405", 00:16:17.716 "strip_size_kb": 0, 00:16:17.716 "state": "online", 00:16:17.716 "raid_level": "raid1", 00:16:17.716 "superblock": true, 00:16:17.716 "num_base_bdevs": 4, 00:16:17.716 "num_base_bdevs_discovered": 3, 00:16:17.716 "num_base_bdevs_operational": 3, 00:16:17.716 "base_bdevs_list": [ 00:16:17.716 { 00:16:17.716 "name": null, 00:16:17.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.716 "is_configured": false, 00:16:17.716 "data_offset": 0, 00:16:17.716 "data_size": 63488 00:16:17.716 }, 00:16:17.716 { 00:16:17.716 "name": "BaseBdev2", 00:16:17.716 "uuid": "ef9249ab-ca1d-53db-97c0-65eff44f3604", 00:16:17.716 "is_configured": true, 00:16:17.716 "data_offset": 2048, 00:16:17.716 "data_size": 63488 00:16:17.716 }, 00:16:17.716 { 00:16:17.716 "name": "BaseBdev3", 00:16:17.716 "uuid": "2e570613-fe71-50fa-b64f-50ca911afc0c", 00:16:17.716 "is_configured": true, 00:16:17.716 "data_offset": 2048, 00:16:17.716 "data_size": 63488 00:16:17.716 }, 00:16:17.716 { 00:16:17.716 "name": "BaseBdev4", 00:16:17.716 "uuid": "57e854ca-aa13-551d-a413-41791f803279", 00:16:17.716 "is_configured": true, 00:16:17.716 "data_offset": 2048, 00:16:17.716 "data_size": 63488 00:16:17.716 } 00:16:17.716 ] 00:16:17.716 }' 00:16:17.716 20:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:17.716 20:13:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.284 20:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 
00:16:18.284 20:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:18.284 20:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:18.284 20:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:18.284 20:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:18.284 20:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.284 20:13:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.284 20:13:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.284 20:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.284 20:13:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.284 20:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:18.284 "name": "raid_bdev1", 00:16:18.284 "uuid": "b038b3e7-b850-4e48-a616-9fac6675b405", 00:16:18.284 "strip_size_kb": 0, 00:16:18.284 "state": "online", 00:16:18.284 "raid_level": "raid1", 00:16:18.284 "superblock": true, 00:16:18.284 "num_base_bdevs": 4, 00:16:18.284 "num_base_bdevs_discovered": 3, 00:16:18.284 "num_base_bdevs_operational": 3, 00:16:18.284 "base_bdevs_list": [ 00:16:18.284 { 00:16:18.284 "name": null, 00:16:18.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.284 "is_configured": false, 00:16:18.284 "data_offset": 0, 00:16:18.284 "data_size": 63488 00:16:18.284 }, 00:16:18.284 { 00:16:18.284 "name": "BaseBdev2", 00:16:18.284 "uuid": "ef9249ab-ca1d-53db-97c0-65eff44f3604", 00:16:18.284 "is_configured": true, 00:16:18.284 "data_offset": 2048, 00:16:18.284 "data_size": 63488 00:16:18.284 }, 00:16:18.284 { 00:16:18.284 "name": "BaseBdev3", 00:16:18.284 "uuid": 
"2e570613-fe71-50fa-b64f-50ca911afc0c", 00:16:18.284 "is_configured": true, 00:16:18.284 "data_offset": 2048, 00:16:18.284 "data_size": 63488 00:16:18.284 }, 00:16:18.284 { 00:16:18.284 "name": "BaseBdev4", 00:16:18.284 "uuid": "57e854ca-aa13-551d-a413-41791f803279", 00:16:18.284 "is_configured": true, 00:16:18.284 "data_offset": 2048, 00:16:18.284 "data_size": 63488 00:16:18.284 } 00:16:18.284 ] 00:16:18.284 }' 00:16:18.284 20:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:18.284 20:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:18.284 20:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:18.284 20:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:18.284 20:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:18.284 20:13:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.284 20:13:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.284 [2024-10-17 20:13:03.885946] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:18.284 [2024-10-17 20:13:03.899745] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:16:18.284 20:13:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.284 20:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:18.284 [2024-10-17 20:13:03.902296] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:19.660 20:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:19.660 20:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:16:19.660 20:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:19.660 20:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:19.660 20:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:19.660 20:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.660 20:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.660 20:13:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.660 20:13:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.660 20:13:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.660 20:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:19.660 "name": "raid_bdev1", 00:16:19.660 "uuid": "b038b3e7-b850-4e48-a616-9fac6675b405", 00:16:19.660 "strip_size_kb": 0, 00:16:19.660 "state": "online", 00:16:19.660 "raid_level": "raid1", 00:16:19.660 "superblock": true, 00:16:19.660 "num_base_bdevs": 4, 00:16:19.660 "num_base_bdevs_discovered": 4, 00:16:19.660 "num_base_bdevs_operational": 4, 00:16:19.660 "process": { 00:16:19.660 "type": "rebuild", 00:16:19.660 "target": "spare", 00:16:19.660 "progress": { 00:16:19.660 "blocks": 20480, 00:16:19.660 "percent": 32 00:16:19.660 } 00:16:19.660 }, 00:16:19.660 "base_bdevs_list": [ 00:16:19.660 { 00:16:19.660 "name": "spare", 00:16:19.660 "uuid": "cf3f758f-2496-5d0f-a19e-60f9b3cf7d83", 00:16:19.660 "is_configured": true, 00:16:19.660 "data_offset": 2048, 00:16:19.660 "data_size": 63488 00:16:19.660 }, 00:16:19.660 { 00:16:19.660 "name": "BaseBdev2", 00:16:19.660 "uuid": "ef9249ab-ca1d-53db-97c0-65eff44f3604", 00:16:19.660 "is_configured": true, 00:16:19.660 "data_offset": 2048, 
00:16:19.660 "data_size": 63488 00:16:19.660 }, 00:16:19.660 { 00:16:19.660 "name": "BaseBdev3", 00:16:19.660 "uuid": "2e570613-fe71-50fa-b64f-50ca911afc0c", 00:16:19.660 "is_configured": true, 00:16:19.660 "data_offset": 2048, 00:16:19.660 "data_size": 63488 00:16:19.660 }, 00:16:19.660 { 00:16:19.660 "name": "BaseBdev4", 00:16:19.660 "uuid": "57e854ca-aa13-551d-a413-41791f803279", 00:16:19.660 "is_configured": true, 00:16:19.660 "data_offset": 2048, 00:16:19.660 "data_size": 63488 00:16:19.660 } 00:16:19.660 ] 00:16:19.660 }' 00:16:19.660 20:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:19.660 20:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:19.660 20:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:19.660 20:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:19.660 20:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:19.660 20:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:19.660 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:19.660 20:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:19.660 20:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:19.660 20:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:16:19.660 20:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:19.660 20:13:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.660 20:13:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.660 [2024-10-17 20:13:05.067314] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:19.660 [2024-10-17 20:13:05.211404] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:16:19.660 20:13:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.660 20:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:16:19.660 20:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:16:19.660 20:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:19.660 20:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:19.660 20:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:19.660 20:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:19.660 20:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:19.660 20:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.660 20:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.660 20:13:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.660 20:13:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.660 20:13:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.660 20:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:19.660 "name": "raid_bdev1", 00:16:19.660 "uuid": "b038b3e7-b850-4e48-a616-9fac6675b405", 00:16:19.660 "strip_size_kb": 0, 00:16:19.660 "state": "online", 00:16:19.660 "raid_level": "raid1", 00:16:19.660 "superblock": true, 00:16:19.660 "num_base_bdevs": 4, 
00:16:19.660 "num_base_bdevs_discovered": 3, 00:16:19.660 "num_base_bdevs_operational": 3, 00:16:19.660 "process": { 00:16:19.660 "type": "rebuild", 00:16:19.660 "target": "spare", 00:16:19.660 "progress": { 00:16:19.660 "blocks": 24576, 00:16:19.660 "percent": 38 00:16:19.660 } 00:16:19.660 }, 00:16:19.660 "base_bdevs_list": [ 00:16:19.660 { 00:16:19.660 "name": "spare", 00:16:19.660 "uuid": "cf3f758f-2496-5d0f-a19e-60f9b3cf7d83", 00:16:19.660 "is_configured": true, 00:16:19.660 "data_offset": 2048, 00:16:19.660 "data_size": 63488 00:16:19.660 }, 00:16:19.660 { 00:16:19.660 "name": null, 00:16:19.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.660 "is_configured": false, 00:16:19.660 "data_offset": 0, 00:16:19.660 "data_size": 63488 00:16:19.660 }, 00:16:19.660 { 00:16:19.660 "name": "BaseBdev3", 00:16:19.660 "uuid": "2e570613-fe71-50fa-b64f-50ca911afc0c", 00:16:19.660 "is_configured": true, 00:16:19.660 "data_offset": 2048, 00:16:19.660 "data_size": 63488 00:16:19.660 }, 00:16:19.660 { 00:16:19.660 "name": "BaseBdev4", 00:16:19.660 "uuid": "57e854ca-aa13-551d-a413-41791f803279", 00:16:19.660 "is_configured": true, 00:16:19.660 "data_offset": 2048, 00:16:19.660 "data_size": 63488 00:16:19.660 } 00:16:19.660 ] 00:16:19.660 }' 00:16:19.660 20:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:19.919 20:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:19.919 20:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:19.919 20:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:19.919 20:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=500 00:16:19.919 20:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:19.919 20:13:05 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:19.919 20:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:19.919 20:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:19.919 20:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:19.919 20:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:19.919 20:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.919 20:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.919 20:13:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.919 20:13:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.919 20:13:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.919 20:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:19.919 "name": "raid_bdev1", 00:16:19.919 "uuid": "b038b3e7-b850-4e48-a616-9fac6675b405", 00:16:19.919 "strip_size_kb": 0, 00:16:19.919 "state": "online", 00:16:19.919 "raid_level": "raid1", 00:16:19.919 "superblock": true, 00:16:19.919 "num_base_bdevs": 4, 00:16:19.919 "num_base_bdevs_discovered": 3, 00:16:19.919 "num_base_bdevs_operational": 3, 00:16:19.919 "process": { 00:16:19.919 "type": "rebuild", 00:16:19.919 "target": "spare", 00:16:19.919 "progress": { 00:16:19.919 "blocks": 26624, 00:16:19.919 "percent": 41 00:16:19.919 } 00:16:19.919 }, 00:16:19.919 "base_bdevs_list": [ 00:16:19.919 { 00:16:19.919 "name": "spare", 00:16:19.919 "uuid": "cf3f758f-2496-5d0f-a19e-60f9b3cf7d83", 00:16:19.919 "is_configured": true, 00:16:19.919 "data_offset": 2048, 00:16:19.919 "data_size": 63488 00:16:19.919 }, 00:16:19.919 { 
00:16:19.919 "name": null, 00:16:19.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.919 "is_configured": false, 00:16:19.919 "data_offset": 0, 00:16:19.919 "data_size": 63488 00:16:19.919 }, 00:16:19.919 { 00:16:19.919 "name": "BaseBdev3", 00:16:19.919 "uuid": "2e570613-fe71-50fa-b64f-50ca911afc0c", 00:16:19.919 "is_configured": true, 00:16:19.919 "data_offset": 2048, 00:16:19.919 "data_size": 63488 00:16:19.919 }, 00:16:19.919 { 00:16:19.919 "name": "BaseBdev4", 00:16:19.919 "uuid": "57e854ca-aa13-551d-a413-41791f803279", 00:16:19.919 "is_configured": true, 00:16:19.919 "data_offset": 2048, 00:16:19.919 "data_size": 63488 00:16:19.919 } 00:16:19.919 ] 00:16:19.919 }' 00:16:19.919 20:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:19.919 20:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:19.919 20:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:19.919 20:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:19.919 20:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:21.297 20:13:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:21.297 20:13:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:21.297 20:13:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:21.297 20:13:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:21.297 20:13:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:21.297 20:13:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:21.297 20:13:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:21.297 20:13:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.297 20:13:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.297 20:13:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.297 20:13:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.298 20:13:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:21.298 "name": "raid_bdev1", 00:16:21.298 "uuid": "b038b3e7-b850-4e48-a616-9fac6675b405", 00:16:21.298 "strip_size_kb": 0, 00:16:21.298 "state": "online", 00:16:21.298 "raid_level": "raid1", 00:16:21.298 "superblock": true, 00:16:21.298 "num_base_bdevs": 4, 00:16:21.298 "num_base_bdevs_discovered": 3, 00:16:21.298 "num_base_bdevs_operational": 3, 00:16:21.298 "process": { 00:16:21.298 "type": "rebuild", 00:16:21.298 "target": "spare", 00:16:21.298 "progress": { 00:16:21.298 "blocks": 51200, 00:16:21.298 "percent": 80 00:16:21.298 } 00:16:21.298 }, 00:16:21.298 "base_bdevs_list": [ 00:16:21.298 { 00:16:21.298 "name": "spare", 00:16:21.298 "uuid": "cf3f758f-2496-5d0f-a19e-60f9b3cf7d83", 00:16:21.298 "is_configured": true, 00:16:21.298 "data_offset": 2048, 00:16:21.298 "data_size": 63488 00:16:21.298 }, 00:16:21.298 { 00:16:21.298 "name": null, 00:16:21.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.298 "is_configured": false, 00:16:21.298 "data_offset": 0, 00:16:21.298 "data_size": 63488 00:16:21.298 }, 00:16:21.298 { 00:16:21.298 "name": "BaseBdev3", 00:16:21.298 "uuid": "2e570613-fe71-50fa-b64f-50ca911afc0c", 00:16:21.298 "is_configured": true, 00:16:21.298 "data_offset": 2048, 00:16:21.298 "data_size": 63488 00:16:21.298 }, 00:16:21.298 { 00:16:21.298 "name": "BaseBdev4", 00:16:21.298 "uuid": "57e854ca-aa13-551d-a413-41791f803279", 00:16:21.298 "is_configured": true, 00:16:21.298 "data_offset": 
2048, 00:16:21.298 "data_size": 63488 00:16:21.298 } 00:16:21.298 ] 00:16:21.298 }' 00:16:21.298 20:13:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:21.298 20:13:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:21.298 20:13:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:21.298 20:13:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:21.298 20:13:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:21.557 [2024-10-17 20:13:07.126530] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:21.557 [2024-10-17 20:13:07.126652] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:21.557 [2024-10-17 20:13:07.126831] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:22.123 20:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:22.123 20:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:22.123 20:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:22.123 20:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:22.123 20:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:22.123 20:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:22.123 20:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.123 20:13:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.123 20:13:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:16:22.123 20:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.123 20:13:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.123 20:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:22.123 "name": "raid_bdev1", 00:16:22.123 "uuid": "b038b3e7-b850-4e48-a616-9fac6675b405", 00:16:22.123 "strip_size_kb": 0, 00:16:22.123 "state": "online", 00:16:22.123 "raid_level": "raid1", 00:16:22.123 "superblock": true, 00:16:22.123 "num_base_bdevs": 4, 00:16:22.123 "num_base_bdevs_discovered": 3, 00:16:22.123 "num_base_bdevs_operational": 3, 00:16:22.123 "base_bdevs_list": [ 00:16:22.123 { 00:16:22.123 "name": "spare", 00:16:22.123 "uuid": "cf3f758f-2496-5d0f-a19e-60f9b3cf7d83", 00:16:22.123 "is_configured": true, 00:16:22.123 "data_offset": 2048, 00:16:22.123 "data_size": 63488 00:16:22.123 }, 00:16:22.123 { 00:16:22.123 "name": null, 00:16:22.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.123 "is_configured": false, 00:16:22.123 "data_offset": 0, 00:16:22.123 "data_size": 63488 00:16:22.123 }, 00:16:22.123 { 00:16:22.123 "name": "BaseBdev3", 00:16:22.123 "uuid": "2e570613-fe71-50fa-b64f-50ca911afc0c", 00:16:22.123 "is_configured": true, 00:16:22.123 "data_offset": 2048, 00:16:22.123 "data_size": 63488 00:16:22.123 }, 00:16:22.123 { 00:16:22.123 "name": "BaseBdev4", 00:16:22.123 "uuid": "57e854ca-aa13-551d-a413-41791f803279", 00:16:22.123 "is_configured": true, 00:16:22.123 "data_offset": 2048, 00:16:22.123 "data_size": 63488 00:16:22.123 } 00:16:22.123 ] 00:16:22.123 }' 00:16:22.382 20:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:22.382 20:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:22.382 20:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:16:22.382 20:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:22.382 20:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:16:22.382 20:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:22.382 20:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:22.382 20:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:22.382 20:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:22.382 20:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:22.382 20:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.382 20:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.382 20:13:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.382 20:13:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.382 20:13:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.382 20:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:22.382 "name": "raid_bdev1", 00:16:22.382 "uuid": "b038b3e7-b850-4e48-a616-9fac6675b405", 00:16:22.382 "strip_size_kb": 0, 00:16:22.382 "state": "online", 00:16:22.382 "raid_level": "raid1", 00:16:22.382 "superblock": true, 00:16:22.382 "num_base_bdevs": 4, 00:16:22.382 "num_base_bdevs_discovered": 3, 00:16:22.382 "num_base_bdevs_operational": 3, 00:16:22.382 "base_bdevs_list": [ 00:16:22.382 { 00:16:22.382 "name": "spare", 00:16:22.382 "uuid": "cf3f758f-2496-5d0f-a19e-60f9b3cf7d83", 00:16:22.382 "is_configured": true, 00:16:22.382 "data_offset": 2048, 00:16:22.382 "data_size": 63488 
00:16:22.382 }, 00:16:22.382 { 00:16:22.382 "name": null, 00:16:22.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.382 "is_configured": false, 00:16:22.382 "data_offset": 0, 00:16:22.382 "data_size": 63488 00:16:22.382 }, 00:16:22.382 { 00:16:22.382 "name": "BaseBdev3", 00:16:22.382 "uuid": "2e570613-fe71-50fa-b64f-50ca911afc0c", 00:16:22.382 "is_configured": true, 00:16:22.382 "data_offset": 2048, 00:16:22.382 "data_size": 63488 00:16:22.382 }, 00:16:22.382 { 00:16:22.382 "name": "BaseBdev4", 00:16:22.382 "uuid": "57e854ca-aa13-551d-a413-41791f803279", 00:16:22.382 "is_configured": true, 00:16:22.382 "data_offset": 2048, 00:16:22.382 "data_size": 63488 00:16:22.382 } 00:16:22.382 ] 00:16:22.382 }' 00:16:22.382 20:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:22.382 20:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:22.382 20:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:22.382 20:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:22.382 20:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:22.382 20:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:22.382 20:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:22.382 20:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:22.382 20:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:22.382 20:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:22.382 20:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.382 20:13:08 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.382 20:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.641 20:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.641 20:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.641 20:13:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.641 20:13:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.641 20:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.641 20:13:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.641 20:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.641 "name": "raid_bdev1", 00:16:22.641 "uuid": "b038b3e7-b850-4e48-a616-9fac6675b405", 00:16:22.641 "strip_size_kb": 0, 00:16:22.641 "state": "online", 00:16:22.641 "raid_level": "raid1", 00:16:22.641 "superblock": true, 00:16:22.641 "num_base_bdevs": 4, 00:16:22.641 "num_base_bdevs_discovered": 3, 00:16:22.641 "num_base_bdevs_operational": 3, 00:16:22.641 "base_bdevs_list": [ 00:16:22.641 { 00:16:22.641 "name": "spare", 00:16:22.641 "uuid": "cf3f758f-2496-5d0f-a19e-60f9b3cf7d83", 00:16:22.641 "is_configured": true, 00:16:22.641 "data_offset": 2048, 00:16:22.641 "data_size": 63488 00:16:22.641 }, 00:16:22.641 { 00:16:22.641 "name": null, 00:16:22.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.641 "is_configured": false, 00:16:22.641 "data_offset": 0, 00:16:22.641 "data_size": 63488 00:16:22.641 }, 00:16:22.641 { 00:16:22.641 "name": "BaseBdev3", 00:16:22.641 "uuid": "2e570613-fe71-50fa-b64f-50ca911afc0c", 00:16:22.641 "is_configured": true, 00:16:22.641 "data_offset": 2048, 00:16:22.641 "data_size": 63488 00:16:22.641 }, 
00:16:22.641 { 00:16:22.641 "name": "BaseBdev4", 00:16:22.641 "uuid": "57e854ca-aa13-551d-a413-41791f803279", 00:16:22.641 "is_configured": true, 00:16:22.641 "data_offset": 2048, 00:16:22.641 "data_size": 63488 00:16:22.641 } 00:16:22.641 ] 00:16:22.641 }' 00:16:22.641 20:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.641 20:13:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.900 20:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:22.900 20:13:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.900 20:13:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.900 [2024-10-17 20:13:08.547414] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:22.900 [2024-10-17 20:13:08.547471] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:22.900 [2024-10-17 20:13:08.547597] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:22.900 [2024-10-17 20:13:08.547722] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:22.900 [2024-10-17 20:13:08.547737] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:22.900 20:13:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.159 20:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.159 20:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:16:23.159 20:13:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.159 20:13:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.159 20:13:08 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.159 20:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:23.159 20:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:23.159 20:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:23.159 20:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:23.159 20:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:23.159 20:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:23.159 20:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:23.159 20:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:23.159 20:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:23.159 20:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:23.159 20:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:23.159 20:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:23.159 20:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:23.418 /dev/nbd0 00:16:23.418 20:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:23.418 20:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:23.418 20:13:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:23.418 20:13:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 
00:16:23.418 20:13:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:23.418 20:13:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:23.418 20:13:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:23.418 20:13:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:16:23.418 20:13:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:23.418 20:13:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:23.418 20:13:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:23.418 1+0 records in 00:16:23.418 1+0 records out 00:16:23.418 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00024721 s, 16.6 MB/s 00:16:23.418 20:13:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:23.418 20:13:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:16:23.418 20:13:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:23.418 20:13:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:23.418 20:13:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:16:23.418 20:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:23.418 20:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:23.418 20:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:23.677 /dev/nbd1 00:16:23.677 20:13:09 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:23.677 20:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:23.677 20:13:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:16:23.677 20:13:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:16:23.677 20:13:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:23.677 20:13:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:23.677 20:13:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:16:23.677 20:13:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:16:23.677 20:13:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:23.677 20:13:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:23.677 20:13:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:23.677 1+0 records in 00:16:23.677 1+0 records out 00:16:23.677 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000460506 s, 8.9 MB/s 00:16:23.677 20:13:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:23.677 20:13:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:16:23.677 20:13:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:23.677 20:13:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:23.677 20:13:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:16:23.677 20:13:09 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:23.677 20:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:23.677 20:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:23.936 20:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:23.936 20:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:23.936 20:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:23.936 20:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:23.936 20:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:23.936 20:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:23.936 20:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:24.195 20:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:24.195 20:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:24.195 20:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:24.195 20:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:24.195 20:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:24.195 20:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:24.195 20:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:24.195 20:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:24.195 20:13:09 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:24.195 20:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:24.763 20:13:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:24.763 20:13:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:24.763 20:13:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:24.763 20:13:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:24.763 20:13:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:24.763 20:13:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:24.763 20:13:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:24.763 20:13:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:24.763 20:13:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:24.763 20:13:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:24.763 20:13:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.763 20:13:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.763 20:13:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.763 20:13:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:24.763 20:13:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.763 20:13:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.763 [2024-10-17 20:13:10.153081] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
spare_delay 00:16:24.763 [2024-10-17 20:13:10.153143] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:24.763 [2024-10-17 20:13:10.153177] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:16:24.763 [2024-10-17 20:13:10.153192] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:24.763 [2024-10-17 20:13:10.156232] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:24.763 [2024-10-17 20:13:10.156275] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:24.763 [2024-10-17 20:13:10.156389] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:24.763 [2024-10-17 20:13:10.156496] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:24.763 [2024-10-17 20:13:10.156733] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:24.763 [2024-10-17 20:13:10.156890] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:24.763 spare 00:16:24.763 20:13:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.763 20:13:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:24.763 20:13:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.763 20:13:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.763 [2024-10-17 20:13:10.257032] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:24.763 [2024-10-17 20:13:10.257079] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:24.763 [2024-10-17 20:13:10.257517] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:16:24.763 [2024-10-17 20:13:10.257761] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:24.763 [2024-10-17 20:13:10.257789] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:24.763 [2024-10-17 20:13:10.258034] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:24.763 20:13:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.763 20:13:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:24.763 20:13:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:24.763 20:13:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:24.763 20:13:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:24.763 20:13:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:24.763 20:13:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:24.763 20:13:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.763 20:13:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.763 20:13:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.763 20:13:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.763 20:13:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.763 20:13:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.763 20:13:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.763 20:13:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:16:24.763 20:13:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.763 20:13:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.763 "name": "raid_bdev1", 00:16:24.763 "uuid": "b038b3e7-b850-4e48-a616-9fac6675b405", 00:16:24.763 "strip_size_kb": 0, 00:16:24.763 "state": "online", 00:16:24.763 "raid_level": "raid1", 00:16:24.763 "superblock": true, 00:16:24.763 "num_base_bdevs": 4, 00:16:24.763 "num_base_bdevs_discovered": 3, 00:16:24.763 "num_base_bdevs_operational": 3, 00:16:24.763 "base_bdevs_list": [ 00:16:24.763 { 00:16:24.763 "name": "spare", 00:16:24.763 "uuid": "cf3f758f-2496-5d0f-a19e-60f9b3cf7d83", 00:16:24.763 "is_configured": true, 00:16:24.763 "data_offset": 2048, 00:16:24.763 "data_size": 63488 00:16:24.763 }, 00:16:24.763 { 00:16:24.763 "name": null, 00:16:24.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.763 "is_configured": false, 00:16:24.763 "data_offset": 2048, 00:16:24.763 "data_size": 63488 00:16:24.763 }, 00:16:24.763 { 00:16:24.763 "name": "BaseBdev3", 00:16:24.763 "uuid": "2e570613-fe71-50fa-b64f-50ca911afc0c", 00:16:24.763 "is_configured": true, 00:16:24.763 "data_offset": 2048, 00:16:24.763 "data_size": 63488 00:16:24.763 }, 00:16:24.763 { 00:16:24.763 "name": "BaseBdev4", 00:16:24.763 "uuid": "57e854ca-aa13-551d-a413-41791f803279", 00:16:24.763 "is_configured": true, 00:16:24.763 "data_offset": 2048, 00:16:24.763 "data_size": 63488 00:16:24.763 } 00:16:24.763 ] 00:16:24.763 }' 00:16:24.763 20:13:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.763 20:13:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.331 20:13:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:25.331 20:13:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:25.331 20:13:10 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:25.331 20:13:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:25.331 20:13:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:25.331 20:13:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.331 20:13:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.331 20:13:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.331 20:13:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.331 20:13:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.331 20:13:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:25.331 "name": "raid_bdev1", 00:16:25.331 "uuid": "b038b3e7-b850-4e48-a616-9fac6675b405", 00:16:25.331 "strip_size_kb": 0, 00:16:25.331 "state": "online", 00:16:25.331 "raid_level": "raid1", 00:16:25.331 "superblock": true, 00:16:25.331 "num_base_bdevs": 4, 00:16:25.331 "num_base_bdevs_discovered": 3, 00:16:25.331 "num_base_bdevs_operational": 3, 00:16:25.331 "base_bdevs_list": [ 00:16:25.331 { 00:16:25.331 "name": "spare", 00:16:25.331 "uuid": "cf3f758f-2496-5d0f-a19e-60f9b3cf7d83", 00:16:25.331 "is_configured": true, 00:16:25.331 "data_offset": 2048, 00:16:25.331 "data_size": 63488 00:16:25.331 }, 00:16:25.331 { 00:16:25.331 "name": null, 00:16:25.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.331 "is_configured": false, 00:16:25.331 "data_offset": 2048, 00:16:25.331 "data_size": 63488 00:16:25.331 }, 00:16:25.331 { 00:16:25.331 "name": "BaseBdev3", 00:16:25.331 "uuid": "2e570613-fe71-50fa-b64f-50ca911afc0c", 00:16:25.331 "is_configured": true, 00:16:25.331 "data_offset": 2048, 00:16:25.331 "data_size": 63488 00:16:25.331 
}, 00:16:25.331 { 00:16:25.331 "name": "BaseBdev4", 00:16:25.331 "uuid": "57e854ca-aa13-551d-a413-41791f803279", 00:16:25.331 "is_configured": true, 00:16:25.331 "data_offset": 2048, 00:16:25.331 "data_size": 63488 00:16:25.331 } 00:16:25.331 ] 00:16:25.331 }' 00:16:25.331 20:13:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:25.331 20:13:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:25.331 20:13:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:25.331 20:13:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:25.331 20:13:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.331 20:13:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.331 20:13:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.331 20:13:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:25.590 20:13:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.590 20:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:25.590 20:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:25.590 20:13:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.590 20:13:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.590 [2024-10-17 20:13:11.026261] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:25.590 20:13:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.590 20:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 2 00:16:25.590 20:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:25.590 20:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:25.590 20:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:25.590 20:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:25.590 20:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:25.590 20:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.590 20:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.590 20:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.590 20:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.590 20:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.590 20:13:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.590 20:13:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.590 20:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.590 20:13:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.590 20:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.590 "name": "raid_bdev1", 00:16:25.590 "uuid": "b038b3e7-b850-4e48-a616-9fac6675b405", 00:16:25.590 "strip_size_kb": 0, 00:16:25.590 "state": "online", 00:16:25.590 "raid_level": "raid1", 00:16:25.590 "superblock": true, 00:16:25.590 "num_base_bdevs": 4, 00:16:25.590 "num_base_bdevs_discovered": 2, 00:16:25.590 "num_base_bdevs_operational": 
2, 00:16:25.590 "base_bdevs_list": [ 00:16:25.590 { 00:16:25.590 "name": null, 00:16:25.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.590 "is_configured": false, 00:16:25.590 "data_offset": 0, 00:16:25.590 "data_size": 63488 00:16:25.590 }, 00:16:25.590 { 00:16:25.590 "name": null, 00:16:25.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.590 "is_configured": false, 00:16:25.590 "data_offset": 2048, 00:16:25.590 "data_size": 63488 00:16:25.590 }, 00:16:25.590 { 00:16:25.590 "name": "BaseBdev3", 00:16:25.590 "uuid": "2e570613-fe71-50fa-b64f-50ca911afc0c", 00:16:25.590 "is_configured": true, 00:16:25.590 "data_offset": 2048, 00:16:25.590 "data_size": 63488 00:16:25.590 }, 00:16:25.590 { 00:16:25.590 "name": "BaseBdev4", 00:16:25.590 "uuid": "57e854ca-aa13-551d-a413-41791f803279", 00:16:25.590 "is_configured": true, 00:16:25.591 "data_offset": 2048, 00:16:25.591 "data_size": 63488 00:16:25.591 } 00:16:25.591 ] 00:16:25.591 }' 00:16:25.591 20:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.591 20:13:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.158 20:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:26.158 20:13:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.158 20:13:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.158 [2024-10-17 20:13:11.570490] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:26.158 [2024-10-17 20:13:11.570768] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:16:26.158 [2024-10-17 20:13:11.570804] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
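The superblock-sequence decision logged just above (seq_number 5 on the re-appearing `spare` bdev versus 6 on the live raid bdev) can be sketched as follows. This is illustrative only; the function name and shape are assumptions, not SPDK's API. A base bdev whose superblock sequence number is lower than the raid bdev's is stale: it missed writes while detached, so it is re-added and rebuilt rather than configured in place.

```shell
# Sketch (hypothetical helper, not SPDK code) of the examine-superblock
# decision: compare the returning bdev's seq_number against the raid bdev's.
examine_sb() {
    local bdev_seq=$1 raid_seq=$2
    if (( bdev_seq < raid_seq )); then
        # stale superblock: device missed writes, must be rebuilt
        echo "re-add and rebuild"
    else
        echo "configure in place"
    fi
}
```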
00:16:26.158 [2024-10-17 20:13:11.570856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:26.158 [2024-10-17 20:13:11.584186] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:16:26.158 20:13:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.158 20:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:26.158 [2024-10-17 20:13:11.586984] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:27.095 20:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:27.095 20:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:27.095 20:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:27.095 20:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:27.095 20:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:27.095 20:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.095 20:13:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.095 20:13:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.095 20:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.095 20:13:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.095 20:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:27.095 "name": "raid_bdev1", 00:16:27.095 "uuid": "b038b3e7-b850-4e48-a616-9fac6675b405", 00:16:27.095 "strip_size_kb": 0, 00:16:27.095 "state": "online", 00:16:27.095 "raid_level": "raid1", 
00:16:27.095 "superblock": true, 00:16:27.095 "num_base_bdevs": 4, 00:16:27.095 "num_base_bdevs_discovered": 3, 00:16:27.095 "num_base_bdevs_operational": 3, 00:16:27.095 "process": { 00:16:27.095 "type": "rebuild", 00:16:27.095 "target": "spare", 00:16:27.095 "progress": { 00:16:27.095 "blocks": 20480, 00:16:27.095 "percent": 32 00:16:27.095 } 00:16:27.095 }, 00:16:27.095 "base_bdevs_list": [ 00:16:27.095 { 00:16:27.095 "name": "spare", 00:16:27.095 "uuid": "cf3f758f-2496-5d0f-a19e-60f9b3cf7d83", 00:16:27.095 "is_configured": true, 00:16:27.095 "data_offset": 2048, 00:16:27.095 "data_size": 63488 00:16:27.095 }, 00:16:27.095 { 00:16:27.095 "name": null, 00:16:27.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.095 "is_configured": false, 00:16:27.095 "data_offset": 2048, 00:16:27.095 "data_size": 63488 00:16:27.095 }, 00:16:27.095 { 00:16:27.095 "name": "BaseBdev3", 00:16:27.095 "uuid": "2e570613-fe71-50fa-b64f-50ca911afc0c", 00:16:27.095 "is_configured": true, 00:16:27.095 "data_offset": 2048, 00:16:27.095 "data_size": 63488 00:16:27.095 }, 00:16:27.095 { 00:16:27.095 "name": "BaseBdev4", 00:16:27.095 "uuid": "57e854ca-aa13-551d-a413-41791f803279", 00:16:27.095 "is_configured": true, 00:16:27.095 "data_offset": 2048, 00:16:27.095 "data_size": 63488 00:16:27.095 } 00:16:27.095 ] 00:16:27.095 }' 00:16:27.095 20:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:27.095 20:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:27.095 20:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:27.354 20:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:27.354 20:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:27.354 20:13:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:27.354 20:13:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.354 [2024-10-17 20:13:12.752390] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:27.354 [2024-10-17 20:13:12.796353] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:27.354 [2024-10-17 20:13:12.796455] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:27.354 [2024-10-17 20:13:12.796514] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:27.354 [2024-10-17 20:13:12.796525] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:27.354 20:13:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.354 20:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:27.354 20:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:27.354 20:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:27.354 20:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:27.354 20:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:27.354 20:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:27.354 20:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.354 20:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:27.354 20:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:27.354 20:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:27.354 20:13:12 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.354 20:13:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.354 20:13:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.354 20:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.354 20:13:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.354 20:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.354 "name": "raid_bdev1", 00:16:27.354 "uuid": "b038b3e7-b850-4e48-a616-9fac6675b405", 00:16:27.354 "strip_size_kb": 0, 00:16:27.354 "state": "online", 00:16:27.354 "raid_level": "raid1", 00:16:27.354 "superblock": true, 00:16:27.354 "num_base_bdevs": 4, 00:16:27.354 "num_base_bdevs_discovered": 2, 00:16:27.354 "num_base_bdevs_operational": 2, 00:16:27.354 "base_bdevs_list": [ 00:16:27.354 { 00:16:27.354 "name": null, 00:16:27.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.354 "is_configured": false, 00:16:27.354 "data_offset": 0, 00:16:27.354 "data_size": 63488 00:16:27.354 }, 00:16:27.354 { 00:16:27.354 "name": null, 00:16:27.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.354 "is_configured": false, 00:16:27.354 "data_offset": 2048, 00:16:27.354 "data_size": 63488 00:16:27.354 }, 00:16:27.354 { 00:16:27.354 "name": "BaseBdev3", 00:16:27.354 "uuid": "2e570613-fe71-50fa-b64f-50ca911afc0c", 00:16:27.354 "is_configured": true, 00:16:27.354 "data_offset": 2048, 00:16:27.354 "data_size": 63488 00:16:27.354 }, 00:16:27.354 { 00:16:27.354 "name": "BaseBdev4", 00:16:27.354 "uuid": "57e854ca-aa13-551d-a413-41791f803279", 00:16:27.354 "is_configured": true, 00:16:27.354 "data_offset": 2048, 00:16:27.354 "data_size": 63488 00:16:27.354 } 00:16:27.354 ] 00:16:27.354 }' 00:16:27.354 20:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:16:27.354 20:13:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.921 20:13:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:27.921 20:13:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.921 20:13:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.921 [2024-10-17 20:13:13.332353] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:27.921 [2024-10-17 20:13:13.332432] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:27.921 [2024-10-17 20:13:13.332474] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:16:27.921 [2024-10-17 20:13:13.332491] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:27.921 [2024-10-17 20:13:13.333233] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:27.921 [2024-10-17 20:13:13.333274] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:27.921 [2024-10-17 20:13:13.333407] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:27.921 [2024-10-17 20:13:13.333427] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:16:27.921 [2024-10-17 20:13:13.333444] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:27.921 [2024-10-17 20:13:13.333477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:27.921 [2024-10-17 20:13:13.347217] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:16:27.921 spare 00:16:27.921 20:13:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.921 20:13:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:27.921 [2024-10-17 20:13:13.349841] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:28.856 20:13:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:28.856 20:13:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:28.856 20:13:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:28.856 20:13:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:28.856 20:13:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:28.856 20:13:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.856 20:13:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.856 20:13:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.856 20:13:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.856 20:13:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.856 20:13:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:28.856 "name": "raid_bdev1", 00:16:28.856 "uuid": "b038b3e7-b850-4e48-a616-9fac6675b405", 00:16:28.856 "strip_size_kb": 0, 00:16:28.856 "state": "online", 00:16:28.856 
"raid_level": "raid1", 00:16:28.856 "superblock": true, 00:16:28.856 "num_base_bdevs": 4, 00:16:28.856 "num_base_bdevs_discovered": 3, 00:16:28.856 "num_base_bdevs_operational": 3, 00:16:28.856 "process": { 00:16:28.856 "type": "rebuild", 00:16:28.856 "target": "spare", 00:16:28.856 "progress": { 00:16:28.856 "blocks": 20480, 00:16:28.856 "percent": 32 00:16:28.856 } 00:16:28.856 }, 00:16:28.856 "base_bdevs_list": [ 00:16:28.856 { 00:16:28.856 "name": "spare", 00:16:28.856 "uuid": "cf3f758f-2496-5d0f-a19e-60f9b3cf7d83", 00:16:28.856 "is_configured": true, 00:16:28.856 "data_offset": 2048, 00:16:28.856 "data_size": 63488 00:16:28.856 }, 00:16:28.856 { 00:16:28.856 "name": null, 00:16:28.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.856 "is_configured": false, 00:16:28.856 "data_offset": 2048, 00:16:28.856 "data_size": 63488 00:16:28.856 }, 00:16:28.856 { 00:16:28.856 "name": "BaseBdev3", 00:16:28.856 "uuid": "2e570613-fe71-50fa-b64f-50ca911afc0c", 00:16:28.856 "is_configured": true, 00:16:28.856 "data_offset": 2048, 00:16:28.856 "data_size": 63488 00:16:28.856 }, 00:16:28.856 { 00:16:28.856 "name": "BaseBdev4", 00:16:28.856 "uuid": "57e854ca-aa13-551d-a413-41791f803279", 00:16:28.856 "is_configured": true, 00:16:28.856 "data_offset": 2048, 00:16:28.856 "data_size": 63488 00:16:28.856 } 00:16:28.856 ] 00:16:28.856 }' 00:16:28.856 20:13:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:28.856 20:13:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:28.856 20:13:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:29.114 20:13:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:29.114 20:13:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:29.114 20:13:14 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.114 20:13:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.114 [2024-10-17 20:13:14.514822] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:29.114 [2024-10-17 20:13:14.558447] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:29.114 [2024-10-17 20:13:14.558545] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:29.114 [2024-10-17 20:13:14.558570] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:29.114 [2024-10-17 20:13:14.558584] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:29.114 20:13:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.114 20:13:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:29.114 20:13:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:29.114 20:13:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:29.114 20:13:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:29.114 20:13:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:29.114 20:13:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:29.114 20:13:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.114 20:13:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.114 20:13:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.114 20:13:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.114 
20:13:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.114 20:13:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.114 20:13:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.114 20:13:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.114 20:13:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.114 20:13:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.114 "name": "raid_bdev1", 00:16:29.114 "uuid": "b038b3e7-b850-4e48-a616-9fac6675b405", 00:16:29.114 "strip_size_kb": 0, 00:16:29.114 "state": "online", 00:16:29.114 "raid_level": "raid1", 00:16:29.114 "superblock": true, 00:16:29.114 "num_base_bdevs": 4, 00:16:29.114 "num_base_bdevs_discovered": 2, 00:16:29.114 "num_base_bdevs_operational": 2, 00:16:29.114 "base_bdevs_list": [ 00:16:29.114 { 00:16:29.114 "name": null, 00:16:29.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.114 "is_configured": false, 00:16:29.114 "data_offset": 0, 00:16:29.114 "data_size": 63488 00:16:29.114 }, 00:16:29.114 { 00:16:29.114 "name": null, 00:16:29.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.114 "is_configured": false, 00:16:29.114 "data_offset": 2048, 00:16:29.114 "data_size": 63488 00:16:29.114 }, 00:16:29.114 { 00:16:29.114 "name": "BaseBdev3", 00:16:29.114 "uuid": "2e570613-fe71-50fa-b64f-50ca911afc0c", 00:16:29.114 "is_configured": true, 00:16:29.114 "data_offset": 2048, 00:16:29.114 "data_size": 63488 00:16:29.114 }, 00:16:29.114 { 00:16:29.114 "name": "BaseBdev4", 00:16:29.114 "uuid": "57e854ca-aa13-551d-a413-41791f803279", 00:16:29.114 "is_configured": true, 00:16:29.114 "data_offset": 2048, 00:16:29.114 "data_size": 63488 00:16:29.114 } 00:16:29.114 ] 00:16:29.114 }' 00:16:29.114 20:13:14 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.114 20:13:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.680 20:13:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:29.680 20:13:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:29.680 20:13:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:29.680 20:13:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:29.680 20:13:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:29.680 20:13:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.680 20:13:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.680 20:13:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.680 20:13:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.680 20:13:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.680 20:13:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:29.680 "name": "raid_bdev1", 00:16:29.680 "uuid": "b038b3e7-b850-4e48-a616-9fac6675b405", 00:16:29.680 "strip_size_kb": 0, 00:16:29.680 "state": "online", 00:16:29.680 "raid_level": "raid1", 00:16:29.680 "superblock": true, 00:16:29.680 "num_base_bdevs": 4, 00:16:29.680 "num_base_bdevs_discovered": 2, 00:16:29.680 "num_base_bdevs_operational": 2, 00:16:29.680 "base_bdevs_list": [ 00:16:29.680 { 00:16:29.680 "name": null, 00:16:29.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.680 "is_configured": false, 00:16:29.680 "data_offset": 0, 00:16:29.680 "data_size": 63488 00:16:29.680 }, 00:16:29.680 
{ 00:16:29.680 "name": null, 00:16:29.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.680 "is_configured": false, 00:16:29.680 "data_offset": 2048, 00:16:29.680 "data_size": 63488 00:16:29.680 }, 00:16:29.680 { 00:16:29.680 "name": "BaseBdev3", 00:16:29.680 "uuid": "2e570613-fe71-50fa-b64f-50ca911afc0c", 00:16:29.680 "is_configured": true, 00:16:29.681 "data_offset": 2048, 00:16:29.681 "data_size": 63488 00:16:29.681 }, 00:16:29.681 { 00:16:29.681 "name": "BaseBdev4", 00:16:29.681 "uuid": "57e854ca-aa13-551d-a413-41791f803279", 00:16:29.681 "is_configured": true, 00:16:29.681 "data_offset": 2048, 00:16:29.681 "data_size": 63488 00:16:29.681 } 00:16:29.681 ] 00:16:29.681 }' 00:16:29.681 20:13:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:29.681 20:13:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:29.681 20:13:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:29.681 20:13:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:29.681 20:13:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:29.681 20:13:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.681 20:13:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.681 20:13:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.681 20:13:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:29.681 20:13:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.681 20:13:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.681 [2024-10-17 20:13:15.270776] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:29.681 [2024-10-17 20:13:15.270854] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:29.681 [2024-10-17 20:13:15.270881] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:16:29.681 [2024-10-17 20:13:15.270897] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:29.681 [2024-10-17 20:13:15.271554] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:29.681 [2024-10-17 20:13:15.271620] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:29.681 [2024-10-17 20:13:15.271735] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:29.681 [2024-10-17 20:13:15.271769] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:16:29.681 [2024-10-17 20:13:15.271782] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:29.681 [2024-10-17 20:13:15.271812] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:29.681 BaseBdev1 00:16:29.681 20:13:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.681 20:13:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:31.062 20:13:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:31.062 20:13:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:31.062 20:13:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:31.062 20:13:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:31.062 20:13:16 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:31.062 20:13:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:31.062 20:13:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.062 20:13:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.062 20:13:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.062 20:13:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.062 20:13:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.062 20:13:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.062 20:13:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.062 20:13:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.062 20:13:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.062 20:13:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.062 "name": "raid_bdev1", 00:16:31.062 "uuid": "b038b3e7-b850-4e48-a616-9fac6675b405", 00:16:31.062 "strip_size_kb": 0, 00:16:31.062 "state": "online", 00:16:31.062 "raid_level": "raid1", 00:16:31.062 "superblock": true, 00:16:31.062 "num_base_bdevs": 4, 00:16:31.062 "num_base_bdevs_discovered": 2, 00:16:31.062 "num_base_bdevs_operational": 2, 00:16:31.062 "base_bdevs_list": [ 00:16:31.062 { 00:16:31.062 "name": null, 00:16:31.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.062 "is_configured": false, 00:16:31.062 "data_offset": 0, 00:16:31.062 "data_size": 63488 00:16:31.062 }, 00:16:31.062 { 00:16:31.062 "name": null, 00:16:31.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.062 
"is_configured": false, 00:16:31.062 "data_offset": 2048, 00:16:31.062 "data_size": 63488 00:16:31.062 }, 00:16:31.062 { 00:16:31.062 "name": "BaseBdev3", 00:16:31.062 "uuid": "2e570613-fe71-50fa-b64f-50ca911afc0c", 00:16:31.062 "is_configured": true, 00:16:31.062 "data_offset": 2048, 00:16:31.062 "data_size": 63488 00:16:31.062 }, 00:16:31.062 { 00:16:31.062 "name": "BaseBdev4", 00:16:31.062 "uuid": "57e854ca-aa13-551d-a413-41791f803279", 00:16:31.062 "is_configured": true, 00:16:31.062 "data_offset": 2048, 00:16:31.062 "data_size": 63488 00:16:31.062 } 00:16:31.062 ] 00:16:31.062 }' 00:16:31.062 20:13:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.062 20:13:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.320 20:13:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:31.320 20:13:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:31.320 20:13:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:31.320 20:13:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:31.320 20:13:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:31.320 20:13:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.320 20:13:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.320 20:13:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.320 20:13:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.320 20:13:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.320 20:13:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:16:31.320 "name": "raid_bdev1", 00:16:31.320 "uuid": "b038b3e7-b850-4e48-a616-9fac6675b405", 00:16:31.320 "strip_size_kb": 0, 00:16:31.320 "state": "online", 00:16:31.320 "raid_level": "raid1", 00:16:31.320 "superblock": true, 00:16:31.320 "num_base_bdevs": 4, 00:16:31.320 "num_base_bdevs_discovered": 2, 00:16:31.320 "num_base_bdevs_operational": 2, 00:16:31.320 "base_bdevs_list": [ 00:16:31.320 { 00:16:31.320 "name": null, 00:16:31.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.320 "is_configured": false, 00:16:31.320 "data_offset": 0, 00:16:31.320 "data_size": 63488 00:16:31.320 }, 00:16:31.320 { 00:16:31.320 "name": null, 00:16:31.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.320 "is_configured": false, 00:16:31.320 "data_offset": 2048, 00:16:31.320 "data_size": 63488 00:16:31.320 }, 00:16:31.320 { 00:16:31.320 "name": "BaseBdev3", 00:16:31.320 "uuid": "2e570613-fe71-50fa-b64f-50ca911afc0c", 00:16:31.320 "is_configured": true, 00:16:31.320 "data_offset": 2048, 00:16:31.320 "data_size": 63488 00:16:31.320 }, 00:16:31.320 { 00:16:31.320 "name": "BaseBdev4", 00:16:31.320 "uuid": "57e854ca-aa13-551d-a413-41791f803279", 00:16:31.320 "is_configured": true, 00:16:31.320 "data_offset": 2048, 00:16:31.320 "data_size": 63488 00:16:31.320 } 00:16:31.320 ] 00:16:31.320 }' 00:16:31.320 20:13:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:31.320 20:13:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:31.320 20:13:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:31.320 20:13:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:31.320 20:13:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:31.320 20:13:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local 
es=0 00:16:31.320 20:13:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:31.320 20:13:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:31.320 20:13:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:31.320 20:13:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:31.578 20:13:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:31.578 20:13:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:31.578 20:13:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.578 20:13:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.578 [2024-10-17 20:13:16.979347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:31.578 [2024-10-17 20:13:16.979654] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:16:31.578 [2024-10-17 20:13:16.979707] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:31.578 request: 00:16:31.578 { 00:16:31.578 "base_bdev": "BaseBdev1", 00:16:31.578 "raid_bdev": "raid_bdev1", 00:16:31.578 "method": "bdev_raid_add_base_bdev", 00:16:31.578 "req_id": 1 00:16:31.578 } 00:16:31.578 Got JSON-RPC error response 00:16:31.578 response: 00:16:31.578 { 00:16:31.578 "code": -22, 00:16:31.578 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:31.578 } 00:16:31.578 20:13:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:31.578 20:13:16 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@653 -- # es=1 00:16:31.578 20:13:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:31.578 20:13:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:31.578 20:13:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:31.578 20:13:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:32.510 20:13:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:32.510 20:13:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:32.510 20:13:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:32.510 20:13:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:32.510 20:13:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:32.510 20:13:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:32.510 20:13:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.510 20:13:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.510 20:13:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.510 20:13:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.510 20:13:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.510 20:13:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.510 20:13:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.510 20:13:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:16:32.510 20:13:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.510 20:13:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.510 "name": "raid_bdev1", 00:16:32.510 "uuid": "b038b3e7-b850-4e48-a616-9fac6675b405", 00:16:32.510 "strip_size_kb": 0, 00:16:32.510 "state": "online", 00:16:32.510 "raid_level": "raid1", 00:16:32.510 "superblock": true, 00:16:32.510 "num_base_bdevs": 4, 00:16:32.510 "num_base_bdevs_discovered": 2, 00:16:32.510 "num_base_bdevs_operational": 2, 00:16:32.510 "base_bdevs_list": [ 00:16:32.510 { 00:16:32.510 "name": null, 00:16:32.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.510 "is_configured": false, 00:16:32.510 "data_offset": 0, 00:16:32.510 "data_size": 63488 00:16:32.510 }, 00:16:32.510 { 00:16:32.510 "name": null, 00:16:32.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.510 "is_configured": false, 00:16:32.510 "data_offset": 2048, 00:16:32.510 "data_size": 63488 00:16:32.510 }, 00:16:32.510 { 00:16:32.510 "name": "BaseBdev3", 00:16:32.510 "uuid": "2e570613-fe71-50fa-b64f-50ca911afc0c", 00:16:32.510 "is_configured": true, 00:16:32.510 "data_offset": 2048, 00:16:32.510 "data_size": 63488 00:16:32.510 }, 00:16:32.510 { 00:16:32.510 "name": "BaseBdev4", 00:16:32.510 "uuid": "57e854ca-aa13-551d-a413-41791f803279", 00:16:32.510 "is_configured": true, 00:16:32.510 "data_offset": 2048, 00:16:32.510 "data_size": 63488 00:16:32.510 } 00:16:32.510 ] 00:16:32.510 }' 00:16:32.510 20:13:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.510 20:13:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.076 20:13:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:33.077 20:13:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:33.077 20:13:18 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:33.077 20:13:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:33.077 20:13:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:33.077 20:13:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.077 20:13:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.077 20:13:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.077 20:13:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.077 20:13:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.077 20:13:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:33.077 "name": "raid_bdev1", 00:16:33.077 "uuid": "b038b3e7-b850-4e48-a616-9fac6675b405", 00:16:33.077 "strip_size_kb": 0, 00:16:33.077 "state": "online", 00:16:33.077 "raid_level": "raid1", 00:16:33.077 "superblock": true, 00:16:33.077 "num_base_bdevs": 4, 00:16:33.077 "num_base_bdevs_discovered": 2, 00:16:33.077 "num_base_bdevs_operational": 2, 00:16:33.077 "base_bdevs_list": [ 00:16:33.077 { 00:16:33.077 "name": null, 00:16:33.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.077 "is_configured": false, 00:16:33.077 "data_offset": 0, 00:16:33.077 "data_size": 63488 00:16:33.077 }, 00:16:33.077 { 00:16:33.077 "name": null, 00:16:33.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.077 "is_configured": false, 00:16:33.077 "data_offset": 2048, 00:16:33.077 "data_size": 63488 00:16:33.077 }, 00:16:33.077 { 00:16:33.077 "name": "BaseBdev3", 00:16:33.077 "uuid": "2e570613-fe71-50fa-b64f-50ca911afc0c", 00:16:33.077 "is_configured": true, 00:16:33.077 "data_offset": 2048, 00:16:33.077 "data_size": 63488 00:16:33.077 }, 
00:16:33.077 { 00:16:33.077 "name": "BaseBdev4", 00:16:33.077 "uuid": "57e854ca-aa13-551d-a413-41791f803279", 00:16:33.077 "is_configured": true, 00:16:33.077 "data_offset": 2048, 00:16:33.077 "data_size": 63488 00:16:33.077 } 00:16:33.077 ] 00:16:33.077 }' 00:16:33.077 20:13:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:33.077 20:13:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:33.077 20:13:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:33.077 20:13:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:33.077 20:13:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78170 00:16:33.077 20:13:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 78170 ']' 00:16:33.077 20:13:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 78170 00:16:33.077 20:13:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:16:33.077 20:13:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:33.077 20:13:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78170 00:16:33.077 20:13:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:33.077 20:13:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:33.077 killing process with pid 78170 00:16:33.077 20:13:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78170' 00:16:33.077 20:13:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 78170 00:16:33.077 Received shutdown signal, test time was about 60.000000 seconds 00:16:33.077 00:16:33.077 Latency(us) 00:16:33.077 
[2024-10-17T20:13:18.731Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:33.077 [2024-10-17T20:13:18.731Z] =================================================================================================================== 00:16:33.077 [2024-10-17T20:13:18.731Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:33.077 [2024-10-17 20:13:18.721765] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:33.077 20:13:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 78170 00:16:33.077 [2024-10-17 20:13:18.721930] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:33.077 [2024-10-17 20:13:18.722043] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:33.077 [2024-10-17 20:13:18.722062] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:33.644 [2024-10-17 20:13:19.143179] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:34.629 20:13:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:16:34.629 00:16:34.629 real 0m29.325s 00:16:34.629 user 0m35.768s 00:16:34.629 sys 0m4.097s 00:16:34.629 20:13:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:34.629 20:13:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.629 ************************************ 00:16:34.629 END TEST raid_rebuild_test_sb 00:16:34.629 ************************************ 00:16:34.630 20:13:20 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:16:34.630 20:13:20 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:16:34.630 20:13:20 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:34.630 20:13:20 bdev_raid -- common/autotest_common.sh@10 -- # 
set +x 00:16:34.630 ************************************ 00:16:34.630 START TEST raid_rebuild_test_io 00:16:34.630 ************************************ 00:16:34.630 20:13:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false true true 00:16:34.630 20:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:34.630 20:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:34.630 20:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:34.630 20:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:16:34.630 20:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:34.630 20:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:34.630 20:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:34.630 20:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:34.630 20:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:34.630 20:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:34.630 20:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:34.630 20:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:34.630 20:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:34.630 20:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:34.630 20:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:34.630 20:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:34.630 20:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # 
echo BaseBdev4 00:16:34.630 20:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:34.630 20:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:34.630 20:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:34.630 20:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:34.630 20:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:34.630 20:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:34.630 20:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:34.630 20:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:34.630 20:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:34.630 20:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:34.630 20:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:34.630 20:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:34.630 20:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78963 00:16:34.630 20:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78963 00:16:34.630 20:13:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 78963 ']' 00:16:34.630 20:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:34.630 20:13:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:34.630 20:13:20 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:16:34.630 20:13:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:34.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:34.630 20:13:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:34.630 20:13:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:34.888 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:34.888 Zero copy mechanism will not be used. 00:16:34.888 [2024-10-17 20:13:20.290413] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 00:16:34.888 [2024-10-17 20:13:20.290593] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78963 ] 00:16:34.888 [2024-10-17 20:13:20.464619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:35.147 [2024-10-17 20:13:20.590540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.147 [2024-10-17 20:13:20.781280] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:35.147 [2024-10-17 20:13:20.781377] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:35.714 20:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:35.714 20:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:16:35.714 20:13:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:35.714 20:13:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 
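The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above comes from a poll-until-ready helper. A minimal sketch of that pattern, with a hypothetical path and timeout standing in for the real helper (which additionally verifies the owning pid stays alive while it waits):

```shell
#!/usr/bin/env bash
# Sketch of the waitforlisten pattern: poll until the server's socket
# path appears, up to a bounded timeout. The path and timeout here are
# illustrative, not the real helper's values.
sock="$(mktemp -u)"              # hypothetical socket path
(sleep 1; touch "$sock") &       # stand-in for the server creating its socket
status="timeout"
for _ in $(seq 1 50); do         # 50 polls x 0.1s = 5s budget
  if [ -e "$sock" ]; then status="listening"; break; fi
  sleep 0.1
done
echo "$status"
rm -f "$sock"
wait
```

Polling with a bounded retry count (the real helper's `max_retries=100`) rather than a fixed sleep keeps startup fast on quick machines while still tolerating slow ones.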
00:16:35.714 20:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.714 20:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:35.714 BaseBdev1_malloc 00:16:35.714 20:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.714 20:13:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:35.714 20:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.714 20:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:35.714 [2024-10-17 20:13:21.347083] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:35.714 [2024-10-17 20:13:21.347178] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:35.714 [2024-10-17 20:13:21.347212] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:35.714 [2024-10-17 20:13:21.347230] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:35.714 [2024-10-17 20:13:21.350104] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:35.714 [2024-10-17 20:13:21.350166] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:35.714 BaseBdev1 00:16:35.714 20:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.714 20:13:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:35.714 20:13:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:35.714 20:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.714 20:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # 
set +x 00:16:35.973 BaseBdev2_malloc 00:16:35.973 20:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.973 20:13:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:35.973 20:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.973 20:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:35.973 [2024-10-17 20:13:21.395803] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:35.973 [2024-10-17 20:13:21.395874] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:35.973 [2024-10-17 20:13:21.395901] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:35.973 [2024-10-17 20:13:21.395918] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:35.973 [2024-10-17 20:13:21.398790] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:35.973 [2024-10-17 20:13:21.398849] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:35.973 BaseBdev2 00:16:35.973 20:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.973 20:13:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:35.973 20:13:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:35.973 20:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.973 20:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:35.973 BaseBdev3_malloc 00:16:35.973 20:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.973 20:13:21 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:35.973 20:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.973 20:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:35.973 [2024-10-17 20:13:21.454821] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:35.973 [2024-10-17 20:13:21.454900] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:35.973 [2024-10-17 20:13:21.454931] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:35.973 [2024-10-17 20:13:21.454950] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:35.973 [2024-10-17 20:13:21.457719] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:35.973 [2024-10-17 20:13:21.457766] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:35.973 BaseBdev3 00:16:35.973 20:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.973 20:13:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:35.973 20:13:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:35.973 20:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.973 20:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:35.973 BaseBdev4_malloc 00:16:35.973 20:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.973 20:13:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:35.973 20:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 
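The repeated `bdev_malloc_create` / `bdev_passthru_create` pairs traced above come from one loop over the base bdev names. A sketch of that loop, with `rpc_cmd` replaced by an echo stub since no SPDK target is running here (the real helper sends JSON-RPC to `/var/tmp/spdk.sock`):

```shell
#!/usr/bin/env bash
# Sketch of the base-bdev setup loop: for each BaseBdevN, create a
# 32 MiB / 512-byte-block malloc bdev and wrap it in a passthru bdev.
rpc_cmd() { echo "rpc $*"; }     # stub standing in for the real RPC helper
base_bdevs=(BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4)
setup_log=""
for bdev in "${base_bdevs[@]}"; do
  setup_log+="$(rpc_cmd bdev_malloc_create 32 512 -b "${bdev}_malloc")"$'\n'
  setup_log+="$(rpc_cmd bdev_passthru_create -b "${bdev}_malloc" -p "$bdev")"$'\n'
done
printf '%s' "$setup_log"
```

The passthru layer is what lets the test later claim and remove individual base bdevs by name without touching the underlying malloc bdevs.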
00:16:35.973 20:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:35.973 [2024-10-17 20:13:21.503674] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:35.973 [2024-10-17 20:13:21.503749] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:35.973 [2024-10-17 20:13:21.503779] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:35.973 [2024-10-17 20:13:21.503796] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:35.973 [2024-10-17 20:13:21.506605] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:35.973 [2024-10-17 20:13:21.506669] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:35.973 BaseBdev4 00:16:35.973 20:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.973 20:13:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:35.973 20:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.973 20:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:35.973 spare_malloc 00:16:35.973 20:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.973 20:13:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:35.973 20:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.973 20:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:35.973 spare_delay 00:16:35.973 20:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.973 20:13:21 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:35.973 20:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.973 20:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:35.973 [2024-10-17 20:13:21.561194] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:35.973 [2024-10-17 20:13:21.561266] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:35.973 [2024-10-17 20:13:21.561296] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:35.973 [2024-10-17 20:13:21.561313] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:35.973 [2024-10-17 20:13:21.564063] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:35.973 [2024-10-17 20:13:21.564120] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:35.973 spare 00:16:35.973 20:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.973 20:13:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:35.973 20:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.973 20:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:35.973 [2024-10-17 20:13:21.569263] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:35.973 [2024-10-17 20:13:21.571727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:35.973 [2024-10-17 20:13:21.571829] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:35.973 [2024-10-17 20:13:21.571913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:16:35.973 [2024-10-17 20:13:21.572058] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:35.973 [2024-10-17 20:13:21.572084] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:35.973 [2024-10-17 20:13:21.572468] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:35.973 [2024-10-17 20:13:21.572717] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:35.973 [2024-10-17 20:13:21.572746] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:35.973 [2024-10-17 20:13:21.572965] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:35.973 20:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.974 20:13:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:35.974 20:13:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:35.974 20:13:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:35.974 20:13:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:35.974 20:13:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:35.974 20:13:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:35.974 20:13:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.974 20:13:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.974 20:13:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.974 20:13:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:16:35.974 20:13:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.974 20:13:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.974 20:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.974 20:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:35.974 20:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.233 20:13:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.233 "name": "raid_bdev1", 00:16:36.233 "uuid": "de42088c-d0a7-4c82-b2de-063247ac7608", 00:16:36.233 "strip_size_kb": 0, 00:16:36.233 "state": "online", 00:16:36.233 "raid_level": "raid1", 00:16:36.233 "superblock": false, 00:16:36.233 "num_base_bdevs": 4, 00:16:36.233 "num_base_bdevs_discovered": 4, 00:16:36.233 "num_base_bdevs_operational": 4, 00:16:36.233 "base_bdevs_list": [ 00:16:36.233 { 00:16:36.233 "name": "BaseBdev1", 00:16:36.233 "uuid": "f36f6dfd-4e8d-56e1-839f-77e0f753f99b", 00:16:36.233 "is_configured": true, 00:16:36.233 "data_offset": 0, 00:16:36.233 "data_size": 65536 00:16:36.233 }, 00:16:36.233 { 00:16:36.233 "name": "BaseBdev2", 00:16:36.233 "uuid": "0128b5d1-8424-5670-b128-294a416e4dc4", 00:16:36.233 "is_configured": true, 00:16:36.233 "data_offset": 0, 00:16:36.233 "data_size": 65536 00:16:36.233 }, 00:16:36.233 { 00:16:36.233 "name": "BaseBdev3", 00:16:36.233 "uuid": "1ec37503-2d62-58a4-bc6a-01935a1a2b6e", 00:16:36.233 "is_configured": true, 00:16:36.233 "data_offset": 0, 00:16:36.233 "data_size": 65536 00:16:36.233 }, 00:16:36.233 { 00:16:36.233 "name": "BaseBdev4", 00:16:36.233 "uuid": "136f7603-97d3-501b-9cc9-27a3f50a61ab", 00:16:36.233 "is_configured": true, 00:16:36.233 "data_offset": 0, 00:16:36.233 "data_size": 65536 00:16:36.233 } 00:16:36.233 ] 00:16:36.233 }' 00:16:36.233 
20:13:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.233 20:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:36.491 20:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:36.491 20:13:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.491 20:13:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:36.491 20:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:36.491 [2024-10-17 20:13:22.114292] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:36.491 20:13:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.750 20:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:16:36.750 20:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.750 20:13:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.750 20:13:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:36.750 20:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:36.750 20:13:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.750 20:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:36.750 20:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:16:36.750 20:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:36.750 20:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:36.750 20:13:22 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.750 20:13:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:36.750 [2024-10-17 20:13:22.221890] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:36.750 20:13:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.750 20:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:36.750 20:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:36.750 20:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:36.750 20:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:36.750 20:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:36.750 20:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:36.750 20:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.750 20:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.750 20:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.750 20:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.750 20:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.750 20:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.750 20:13:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.750 20:13:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:36.750 20:13:22 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.750 20:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.750 "name": "raid_bdev1", 00:16:36.750 "uuid": "de42088c-d0a7-4c82-b2de-063247ac7608", 00:16:36.750 "strip_size_kb": 0, 00:16:36.750 "state": "online", 00:16:36.750 "raid_level": "raid1", 00:16:36.750 "superblock": false, 00:16:36.750 "num_base_bdevs": 4, 00:16:36.750 "num_base_bdevs_discovered": 3, 00:16:36.750 "num_base_bdevs_operational": 3, 00:16:36.750 "base_bdevs_list": [ 00:16:36.750 { 00:16:36.750 "name": null, 00:16:36.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.750 "is_configured": false, 00:16:36.750 "data_offset": 0, 00:16:36.751 "data_size": 65536 00:16:36.751 }, 00:16:36.751 { 00:16:36.751 "name": "BaseBdev2", 00:16:36.751 "uuid": "0128b5d1-8424-5670-b128-294a416e4dc4", 00:16:36.751 "is_configured": true, 00:16:36.751 "data_offset": 0, 00:16:36.751 "data_size": 65536 00:16:36.751 }, 00:16:36.751 { 00:16:36.751 "name": "BaseBdev3", 00:16:36.751 "uuid": "1ec37503-2d62-58a4-bc6a-01935a1a2b6e", 00:16:36.751 "is_configured": true, 00:16:36.751 "data_offset": 0, 00:16:36.751 "data_size": 65536 00:16:36.751 }, 00:16:36.751 { 00:16:36.751 "name": "BaseBdev4", 00:16:36.751 "uuid": "136f7603-97d3-501b-9cc9-27a3f50a61ab", 00:16:36.751 "is_configured": true, 00:16:36.751 "data_offset": 0, 00:16:36.751 "data_size": 65536 00:16:36.751 } 00:16:36.751 ] 00:16:36.751 }' 00:16:36.751 20:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.751 20:13:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:36.751 [2024-10-17 20:13:22.346317] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:36.751 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:36.751 Zero copy mechanism will not be used. 00:16:36.751 Running I/O for 60 seconds... 
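The odd-looking comparisons xtrace prints, such as `[[ none == \n\o\n\e ]]` and `[[ rebuild == \r\e\b\u\i\l\d ]]`, exist because the right-hand side of `==` inside `[[ ]]` is a glob pattern: escaping every character forces a literal string match. A minimal bash sketch of the same idiom:

```shell
#!/usr/bin/env bash
# Inside [[ ]], the RHS of == is a glob pattern; backslash-escaping each
# character (as xtrace shows for \n\o\n\e) forces a literal comparison.
process_type=none
if [[ $process_type == \n\o\n\e ]]; then match="literal"; else match="other"; fi
echo "$match"
# An unescaped pattern would glob-match too broadly, e.g. 'n*e' also
# matches "none" -- which is why the helpers escape the expected value.
[[ $process_type == n*e ]] && glob="matched"
echo "$glob"
```

The escaping is emitted automatically by the test helpers; it is cosmetic in the trace but semantically required for exact-match checks.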
00:16:37.318 20:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:37.318 20:13:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.318 20:13:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:37.318 [2024-10-17 20:13:22.756981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:37.318 20:13:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.318 20:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:37.318 [2024-10-17 20:13:22.850243] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:16:37.318 [2024-10-17 20:13:22.852943] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:37.577 [2024-10-17 20:13:22.982108] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:37.577 [2024-10-17 20:13:22.982915] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:37.577 [2024-10-17 20:13:23.111720] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:37.577 [2024-10-17 20:13:23.112701] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:37.834 138.00 IOPS, 414.00 MiB/s [2024-10-17T20:13:23.488Z] [2024-10-17 20:13:23.454082] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:37.834 [2024-10-17 20:13:23.455857] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:38.092 [2024-10-17 20:13:23.668922] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:38.092 [2024-10-17 20:13:23.669365] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:38.378 20:13:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:38.378 20:13:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:38.378 20:13:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:38.378 20:13:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:38.378 20:13:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:38.378 20:13:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.378 20:13:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.378 20:13:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.378 20:13:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:38.378 20:13:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.378 20:13:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:38.378 "name": "raid_bdev1", 00:16:38.378 "uuid": "de42088c-d0a7-4c82-b2de-063247ac7608", 00:16:38.378 "strip_size_kb": 0, 00:16:38.378 "state": "online", 00:16:38.378 "raid_level": "raid1", 00:16:38.378 "superblock": false, 00:16:38.378 "num_base_bdevs": 4, 00:16:38.378 "num_base_bdevs_discovered": 4, 00:16:38.378 "num_base_bdevs_operational": 4, 00:16:38.378 "process": { 00:16:38.378 "type": "rebuild", 00:16:38.378 "target": "spare", 00:16:38.378 "progress": { 00:16:38.378 "blocks": 12288, 
00:16:38.378 "percent": 18 00:16:38.378 } 00:16:38.378 }, 00:16:38.378 "base_bdevs_list": [ 00:16:38.378 { 00:16:38.378 "name": "spare", 00:16:38.378 "uuid": "09e318c8-c735-5791-87c2-9fe0cbb4833c", 00:16:38.378 "is_configured": true, 00:16:38.378 "data_offset": 0, 00:16:38.378 "data_size": 65536 00:16:38.378 }, 00:16:38.378 { 00:16:38.378 "name": "BaseBdev2", 00:16:38.378 "uuid": "0128b5d1-8424-5670-b128-294a416e4dc4", 00:16:38.378 "is_configured": true, 00:16:38.378 "data_offset": 0, 00:16:38.378 "data_size": 65536 00:16:38.378 }, 00:16:38.378 { 00:16:38.378 "name": "BaseBdev3", 00:16:38.378 "uuid": "1ec37503-2d62-58a4-bc6a-01935a1a2b6e", 00:16:38.378 "is_configured": true, 00:16:38.378 "data_offset": 0, 00:16:38.378 "data_size": 65536 00:16:38.378 }, 00:16:38.378 { 00:16:38.378 "name": "BaseBdev4", 00:16:38.378 "uuid": "136f7603-97d3-501b-9cc9-27a3f50a61ab", 00:16:38.378 "is_configured": true, 00:16:38.378 "data_offset": 0, 00:16:38.378 "data_size": 65536 00:16:38.378 } 00:16:38.378 ] 00:16:38.378 }' 00:16:38.378 20:13:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:38.378 20:13:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:38.378 20:13:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:38.378 20:13:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:38.378 20:13:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:38.378 20:13:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.378 20:13:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:38.378 [2024-10-17 20:13:23.984256] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:38.646 [2024-10-17 20:13:24.063125] bdev_raid.c:2571:raid_bdev_process_finish_done: 
*WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:38.646 [2024-10-17 20:13:24.075861] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:38.646 [2024-10-17 20:13:24.075942] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:38.646 [2024-10-17 20:13:24.075976] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:38.646 [2024-10-17 20:13:24.115055] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:16:38.646 20:13:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.646 20:13:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:38.646 20:13:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:38.646 20:13:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:38.646 20:13:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:38.646 20:13:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:38.646 20:13:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:38.646 20:13:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.646 20:13:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.646 20:13:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.646 20:13:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.646 20:13:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.646 20:13:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:16:38.646 20:13:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.646 20:13:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:38.646 20:13:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.646 20:13:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.646 "name": "raid_bdev1", 00:16:38.646 "uuid": "de42088c-d0a7-4c82-b2de-063247ac7608", 00:16:38.646 "strip_size_kb": 0, 00:16:38.646 "state": "online", 00:16:38.646 "raid_level": "raid1", 00:16:38.646 "superblock": false, 00:16:38.646 "num_base_bdevs": 4, 00:16:38.646 "num_base_bdevs_discovered": 3, 00:16:38.646 "num_base_bdevs_operational": 3, 00:16:38.646 "base_bdevs_list": [ 00:16:38.646 { 00:16:38.646 "name": null, 00:16:38.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.646 "is_configured": false, 00:16:38.646 "data_offset": 0, 00:16:38.646 "data_size": 65536 00:16:38.646 }, 00:16:38.646 { 00:16:38.646 "name": "BaseBdev2", 00:16:38.646 "uuid": "0128b5d1-8424-5670-b128-294a416e4dc4", 00:16:38.646 "is_configured": true, 00:16:38.646 "data_offset": 0, 00:16:38.646 "data_size": 65536 00:16:38.646 }, 00:16:38.646 { 00:16:38.646 "name": "BaseBdev3", 00:16:38.646 "uuid": "1ec37503-2d62-58a4-bc6a-01935a1a2b6e", 00:16:38.646 "is_configured": true, 00:16:38.646 "data_offset": 0, 00:16:38.646 "data_size": 65536 00:16:38.646 }, 00:16:38.646 { 00:16:38.646 "name": "BaseBdev4", 00:16:38.646 "uuid": "136f7603-97d3-501b-9cc9-27a3f50a61ab", 00:16:38.646 "is_configured": true, 00:16:38.646 "data_offset": 0, 00:16:38.646 "data_size": 65536 00:16:38.646 } 00:16:38.646 ] 00:16:38.646 }' 00:16:38.646 20:13:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.646 20:13:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:39.164 118.50 IOPS, 355.50 MiB/s 
[2024-10-17T20:13:24.818Z] 20:13:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:39.164 20:13:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:39.164 20:13:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:39.164 20:13:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:39.164 20:13:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:39.164 20:13:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.164 20:13:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.164 20:13:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.164 20:13:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:39.164 20:13:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.164 20:13:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:39.164 "name": "raid_bdev1", 00:16:39.164 "uuid": "de42088c-d0a7-4c82-b2de-063247ac7608", 00:16:39.164 "strip_size_kb": 0, 00:16:39.164 "state": "online", 00:16:39.164 "raid_level": "raid1", 00:16:39.164 "superblock": false, 00:16:39.164 "num_base_bdevs": 4, 00:16:39.164 "num_base_bdevs_discovered": 3, 00:16:39.164 "num_base_bdevs_operational": 3, 00:16:39.164 "base_bdevs_list": [ 00:16:39.164 { 00:16:39.164 "name": null, 00:16:39.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.164 "is_configured": false, 00:16:39.164 "data_offset": 0, 00:16:39.164 "data_size": 65536 00:16:39.164 }, 00:16:39.164 { 00:16:39.164 "name": "BaseBdev2", 00:16:39.164 "uuid": "0128b5d1-8424-5670-b128-294a416e4dc4", 00:16:39.164 "is_configured": true, 00:16:39.164 
"data_offset": 0, 00:16:39.164 "data_size": 65536 00:16:39.164 }, 00:16:39.164 { 00:16:39.164 "name": "BaseBdev3", 00:16:39.164 "uuid": "1ec37503-2d62-58a4-bc6a-01935a1a2b6e", 00:16:39.164 "is_configured": true, 00:16:39.164 "data_offset": 0, 00:16:39.164 "data_size": 65536 00:16:39.164 }, 00:16:39.164 { 00:16:39.164 "name": "BaseBdev4", 00:16:39.164 "uuid": "136f7603-97d3-501b-9cc9-27a3f50a61ab", 00:16:39.164 "is_configured": true, 00:16:39.164 "data_offset": 0, 00:16:39.164 "data_size": 65536 00:16:39.164 } 00:16:39.164 ] 00:16:39.164 }' 00:16:39.164 20:13:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:39.164 20:13:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:39.164 20:13:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:39.421 20:13:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:39.421 20:13:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:39.421 20:13:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.421 20:13:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:39.421 [2024-10-17 20:13:24.848412] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:39.421 20:13:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.421 20:13:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:39.421 [2024-10-17 20:13:24.941970] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:39.421 [2024-10-17 20:13:24.944716] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:39.421 [2024-10-17 20:13:25.072283] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:39.678 [2024-10-17 20:13:25.218629] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:39.678 [2024-10-17 20:13:25.219597] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:40.194 134.67 IOPS, 404.00 MiB/s [2024-10-17T20:13:25.848Z] [2024-10-17 20:13:25.588267] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:40.194 [2024-10-17 20:13:25.810901] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:40.194 [2024-10-17 20:13:25.811390] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:40.452 20:13:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:40.452 20:13:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:40.452 20:13:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:40.452 20:13:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:40.453 20:13:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:40.453 20:13:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.453 20:13:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.453 20:13:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:40.453 20:13:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.453 20:13:25 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.453 20:13:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:40.453 "name": "raid_bdev1", 00:16:40.453 "uuid": "de42088c-d0a7-4c82-b2de-063247ac7608", 00:16:40.453 "strip_size_kb": 0, 00:16:40.453 "state": "online", 00:16:40.453 "raid_level": "raid1", 00:16:40.453 "superblock": false, 00:16:40.453 "num_base_bdevs": 4, 00:16:40.453 "num_base_bdevs_discovered": 4, 00:16:40.453 "num_base_bdevs_operational": 4, 00:16:40.453 "process": { 00:16:40.453 "type": "rebuild", 00:16:40.453 "target": "spare", 00:16:40.453 "progress": { 00:16:40.453 "blocks": 10240, 00:16:40.453 "percent": 15 00:16:40.453 } 00:16:40.453 }, 00:16:40.453 "base_bdevs_list": [ 00:16:40.453 { 00:16:40.453 "name": "spare", 00:16:40.453 "uuid": "09e318c8-c735-5791-87c2-9fe0cbb4833c", 00:16:40.453 "is_configured": true, 00:16:40.453 "data_offset": 0, 00:16:40.453 "data_size": 65536 00:16:40.453 }, 00:16:40.453 { 00:16:40.453 "name": "BaseBdev2", 00:16:40.453 "uuid": "0128b5d1-8424-5670-b128-294a416e4dc4", 00:16:40.453 "is_configured": true, 00:16:40.453 "data_offset": 0, 00:16:40.453 "data_size": 65536 00:16:40.453 }, 00:16:40.453 { 00:16:40.453 "name": "BaseBdev3", 00:16:40.453 "uuid": "1ec37503-2d62-58a4-bc6a-01935a1a2b6e", 00:16:40.453 "is_configured": true, 00:16:40.453 "data_offset": 0, 00:16:40.453 "data_size": 65536 00:16:40.453 }, 00:16:40.453 { 00:16:40.453 "name": "BaseBdev4", 00:16:40.453 "uuid": "136f7603-97d3-501b-9cc9-27a3f50a61ab", 00:16:40.453 "is_configured": true, 00:16:40.453 "data_offset": 0, 00:16:40.453 "data_size": 65536 00:16:40.453 } 00:16:40.453 ] 00:16:40.453 }' 00:16:40.453 20:13:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:40.453 20:13:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:40.453 20:13:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:16:40.453 [2024-10-17 20:13:26.044843] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:40.453 20:13:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:40.453 20:13:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:40.453 20:13:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:40.453 20:13:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:40.453 20:13:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:16:40.453 20:13:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:40.453 20:13:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.453 20:13:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:40.453 [2024-10-17 20:13:26.100967] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:40.712 [2024-10-17 20:13:26.166643] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:16:40.712 [2024-10-17 20:13:26.274122] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:16:40.712 [2024-10-17 20:13:26.274213] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:16:40.712 [2024-10-17 20:13:26.286263] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:16:40.712 20:13:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.712 20:13:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:16:40.712 20:13:26 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:16:40.712 20:13:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:40.712 20:13:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:40.712 20:13:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:40.712 20:13:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:40.712 20:13:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:40.712 20:13:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.712 20:13:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.712 20:13:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.712 20:13:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:40.712 20:13:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.712 20:13:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:40.712 "name": "raid_bdev1", 00:16:40.712 "uuid": "de42088c-d0a7-4c82-b2de-063247ac7608", 00:16:40.712 "strip_size_kb": 0, 00:16:40.712 "state": "online", 00:16:40.712 "raid_level": "raid1", 00:16:40.712 "superblock": false, 00:16:40.712 "num_base_bdevs": 4, 00:16:40.712 "num_base_bdevs_discovered": 3, 00:16:40.712 "num_base_bdevs_operational": 3, 00:16:40.712 "process": { 00:16:40.712 "type": "rebuild", 00:16:40.712 "target": "spare", 00:16:40.712 "progress": { 00:16:40.712 "blocks": 16384, 00:16:40.712 "percent": 25 00:16:40.712 } 00:16:40.712 }, 00:16:40.712 "base_bdevs_list": [ 00:16:40.712 { 00:16:40.712 "name": "spare", 00:16:40.712 "uuid": 
"09e318c8-c735-5791-87c2-9fe0cbb4833c", 00:16:40.712 "is_configured": true, 00:16:40.712 "data_offset": 0, 00:16:40.712 "data_size": 65536 00:16:40.712 }, 00:16:40.712 { 00:16:40.712 "name": null, 00:16:40.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.712 "is_configured": false, 00:16:40.712 "data_offset": 0, 00:16:40.712 "data_size": 65536 00:16:40.712 }, 00:16:40.712 { 00:16:40.712 "name": "BaseBdev3", 00:16:40.712 "uuid": "1ec37503-2d62-58a4-bc6a-01935a1a2b6e", 00:16:40.712 "is_configured": true, 00:16:40.712 "data_offset": 0, 00:16:40.712 "data_size": 65536 00:16:40.712 }, 00:16:40.712 { 00:16:40.712 "name": "BaseBdev4", 00:16:40.712 "uuid": "136f7603-97d3-501b-9cc9-27a3f50a61ab", 00:16:40.712 "is_configured": true, 00:16:40.712 "data_offset": 0, 00:16:40.712 "data_size": 65536 00:16:40.712 } 00:16:40.712 ] 00:16:40.712 }' 00:16:40.712 20:13:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:40.971 112.25 IOPS, 336.75 MiB/s [2024-10-17T20:13:26.625Z] 20:13:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:40.971 20:13:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:40.971 20:13:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:40.971 20:13:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=521 00:16:40.971 20:13:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:40.971 20:13:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:40.971 20:13:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:40.971 20:13:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:40.971 20:13:26 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:16:40.971 20:13:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:40.971 20:13:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.971 20:13:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.971 20:13:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.971 20:13:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:40.971 20:13:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.971 20:13:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:40.971 "name": "raid_bdev1", 00:16:40.971 "uuid": "de42088c-d0a7-4c82-b2de-063247ac7608", 00:16:40.971 "strip_size_kb": 0, 00:16:40.971 "state": "online", 00:16:40.972 "raid_level": "raid1", 00:16:40.972 "superblock": false, 00:16:40.972 "num_base_bdevs": 4, 00:16:40.972 "num_base_bdevs_discovered": 3, 00:16:40.972 "num_base_bdevs_operational": 3, 00:16:40.972 "process": { 00:16:40.972 "type": "rebuild", 00:16:40.972 "target": "spare", 00:16:40.972 "progress": { 00:16:40.972 "blocks": 18432, 00:16:40.972 "percent": 28 00:16:40.972 } 00:16:40.972 }, 00:16:40.972 "base_bdevs_list": [ 00:16:40.972 { 00:16:40.972 "name": "spare", 00:16:40.972 "uuid": "09e318c8-c735-5791-87c2-9fe0cbb4833c", 00:16:40.972 "is_configured": true, 00:16:40.972 "data_offset": 0, 00:16:40.972 "data_size": 65536 00:16:40.972 }, 00:16:40.972 { 00:16:40.972 "name": null, 00:16:40.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.972 "is_configured": false, 00:16:40.972 "data_offset": 0, 00:16:40.972 "data_size": 65536 00:16:40.972 }, 00:16:40.972 { 00:16:40.972 "name": "BaseBdev3", 00:16:40.972 "uuid": "1ec37503-2d62-58a4-bc6a-01935a1a2b6e", 00:16:40.972 "is_configured": true, 00:16:40.972 
"data_offset": 0, 00:16:40.972 "data_size": 65536 00:16:40.972 }, 00:16:40.972 { 00:16:40.972 "name": "BaseBdev4", 00:16:40.972 "uuid": "136f7603-97d3-501b-9cc9-27a3f50a61ab", 00:16:40.972 "is_configured": true, 00:16:40.972 "data_offset": 0, 00:16:40.972 "data_size": 65536 00:16:40.972 } 00:16:40.972 ] 00:16:40.972 }' 00:16:40.972 20:13:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:40.972 20:13:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:40.972 20:13:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:40.972 20:13:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:40.972 20:13:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:41.230 [2024-10-17 20:13:26.631169] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:16:41.488 [2024-10-17 20:13:27.090182] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:16:41.488 [2024-10-17 20:13:27.091150] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:16:42.007 99.80 IOPS, 299.40 MiB/s [2024-10-17T20:13:27.662Z] [2024-10-17 20:13:27.435779] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:16:42.008 20:13:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:42.008 20:13:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:42.008 20:13:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:42.008 20:13:27 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:42.008 20:13:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:42.008 20:13:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:42.008 20:13:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.008 20:13:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.008 20:13:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.008 20:13:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:42.008 20:13:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.266 20:13:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:42.266 "name": "raid_bdev1", 00:16:42.266 "uuid": "de42088c-d0a7-4c82-b2de-063247ac7608", 00:16:42.266 "strip_size_kb": 0, 00:16:42.266 "state": "online", 00:16:42.266 "raid_level": "raid1", 00:16:42.266 "superblock": false, 00:16:42.266 "num_base_bdevs": 4, 00:16:42.266 "num_base_bdevs_discovered": 3, 00:16:42.266 "num_base_bdevs_operational": 3, 00:16:42.266 "process": { 00:16:42.266 "type": "rebuild", 00:16:42.266 "target": "spare", 00:16:42.266 "progress": { 00:16:42.266 "blocks": 32768, 00:16:42.266 "percent": 50 00:16:42.266 } 00:16:42.266 }, 00:16:42.266 "base_bdevs_list": [ 00:16:42.266 { 00:16:42.266 "name": "spare", 00:16:42.266 "uuid": "09e318c8-c735-5791-87c2-9fe0cbb4833c", 00:16:42.266 "is_configured": true, 00:16:42.266 "data_offset": 0, 00:16:42.266 "data_size": 65536 00:16:42.266 }, 00:16:42.266 { 00:16:42.266 "name": null, 00:16:42.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.266 "is_configured": false, 00:16:42.266 "data_offset": 0, 00:16:42.266 "data_size": 65536 00:16:42.266 }, 00:16:42.266 { 00:16:42.266 "name": "BaseBdev3", 
00:16:42.266 "uuid": "1ec37503-2d62-58a4-bc6a-01935a1a2b6e", 00:16:42.266 "is_configured": true, 00:16:42.266 "data_offset": 0, 00:16:42.266 "data_size": 65536 00:16:42.266 }, 00:16:42.266 { 00:16:42.266 "name": "BaseBdev4", 00:16:42.266 "uuid": "136f7603-97d3-501b-9cc9-27a3f50a61ab", 00:16:42.266 "is_configured": true, 00:16:42.266 "data_offset": 0, 00:16:42.266 "data_size": 65536 00:16:42.266 } 00:16:42.266 ] 00:16:42.266 }' 00:16:42.266 20:13:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:42.266 [2024-10-17 20:13:27.680585] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:16:42.266 [2024-10-17 20:13:27.681263] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:16:42.266 20:13:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:42.266 20:13:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:42.266 20:13:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:42.266 20:13:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:43.401 88.67 IOPS, 266.00 MiB/s [2024-10-17T20:13:29.055Z] 20:13:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:43.401 20:13:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:43.401 20:13:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:43.401 20:13:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:43.401 20:13:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:43.401 20:13:28 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:43.401 20:13:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.401 20:13:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.401 20:13:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.401 20:13:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:43.401 20:13:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.401 20:13:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:43.401 "name": "raid_bdev1", 00:16:43.401 "uuid": "de42088c-d0a7-4c82-b2de-063247ac7608", 00:16:43.401 "strip_size_kb": 0, 00:16:43.401 "state": "online", 00:16:43.401 "raid_level": "raid1", 00:16:43.401 "superblock": false, 00:16:43.401 "num_base_bdevs": 4, 00:16:43.401 "num_base_bdevs_discovered": 3, 00:16:43.401 "num_base_bdevs_operational": 3, 00:16:43.401 "process": { 00:16:43.401 "type": "rebuild", 00:16:43.401 "target": "spare", 00:16:43.401 "progress": { 00:16:43.401 "blocks": 51200, 00:16:43.401 "percent": 78 00:16:43.401 } 00:16:43.401 }, 00:16:43.401 "base_bdevs_list": [ 00:16:43.401 { 00:16:43.401 "name": "spare", 00:16:43.401 "uuid": "09e318c8-c735-5791-87c2-9fe0cbb4833c", 00:16:43.401 "is_configured": true, 00:16:43.401 "data_offset": 0, 00:16:43.401 "data_size": 65536 00:16:43.401 }, 00:16:43.401 { 00:16:43.401 "name": null, 00:16:43.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.401 "is_configured": false, 00:16:43.401 "data_offset": 0, 00:16:43.401 "data_size": 65536 00:16:43.401 }, 00:16:43.401 { 00:16:43.401 "name": "BaseBdev3", 00:16:43.401 "uuid": "1ec37503-2d62-58a4-bc6a-01935a1a2b6e", 00:16:43.401 "is_configured": true, 00:16:43.401 "data_offset": 0, 00:16:43.401 "data_size": 65536 00:16:43.401 }, 00:16:43.401 { 00:16:43.401 "name": 
"BaseBdev4", 00:16:43.401 "uuid": "136f7603-97d3-501b-9cc9-27a3f50a61ab", 00:16:43.401 "is_configured": true, 00:16:43.401 "data_offset": 0, 00:16:43.401 "data_size": 65536 00:16:43.401 } 00:16:43.401 ] 00:16:43.401 }' 00:16:43.401 20:13:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:43.401 20:13:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:43.401 20:13:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:43.401 20:13:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:43.401 20:13:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:43.967 81.00 IOPS, 243.00 MiB/s [2024-10-17T20:13:29.621Z] [2024-10-17 20:13:29.480656] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:43.967 [2024-10-17 20:13:29.580699] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:43.967 [2024-10-17 20:13:29.583942] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:44.533 20:13:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:44.533 20:13:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:44.533 20:13:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:44.533 20:13:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:44.533 20:13:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:44.533 20:13:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:44.533 20:13:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.533 
20:13:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.533 20:13:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.533 20:13:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:44.533 20:13:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.533 20:13:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:44.533 "name": "raid_bdev1", 00:16:44.533 "uuid": "de42088c-d0a7-4c82-b2de-063247ac7608", 00:16:44.533 "strip_size_kb": 0, 00:16:44.533 "state": "online", 00:16:44.533 "raid_level": "raid1", 00:16:44.533 "superblock": false, 00:16:44.533 "num_base_bdevs": 4, 00:16:44.533 "num_base_bdevs_discovered": 3, 00:16:44.533 "num_base_bdevs_operational": 3, 00:16:44.533 "base_bdevs_list": [ 00:16:44.533 { 00:16:44.533 "name": "spare", 00:16:44.533 "uuid": "09e318c8-c735-5791-87c2-9fe0cbb4833c", 00:16:44.533 "is_configured": true, 00:16:44.533 "data_offset": 0, 00:16:44.533 "data_size": 65536 00:16:44.533 }, 00:16:44.533 { 00:16:44.533 "name": null, 00:16:44.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.533 "is_configured": false, 00:16:44.533 "data_offset": 0, 00:16:44.533 "data_size": 65536 00:16:44.533 }, 00:16:44.533 { 00:16:44.533 "name": "BaseBdev3", 00:16:44.533 "uuid": "1ec37503-2d62-58a4-bc6a-01935a1a2b6e", 00:16:44.533 "is_configured": true, 00:16:44.533 "data_offset": 0, 00:16:44.533 "data_size": 65536 00:16:44.533 }, 00:16:44.533 { 00:16:44.533 "name": "BaseBdev4", 00:16:44.533 "uuid": "136f7603-97d3-501b-9cc9-27a3f50a61ab", 00:16:44.533 "is_configured": true, 00:16:44.533 "data_offset": 0, 00:16:44.533 "data_size": 65536 00:16:44.533 } 00:16:44.533 ] 00:16:44.533 }' 00:16:44.533 20:13:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:44.533 20:13:30 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:44.533 20:13:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:44.533 20:13:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:44.533 20:13:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:16:44.533 20:13:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:44.533 20:13:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:44.533 20:13:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:44.533 20:13:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:44.533 20:13:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:44.533 20:13:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.533 20:13:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.533 20:13:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:44.533 20:13:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.533 20:13:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.533 20:13:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:44.533 "name": "raid_bdev1", 00:16:44.533 "uuid": "de42088c-d0a7-4c82-b2de-063247ac7608", 00:16:44.533 "strip_size_kb": 0, 00:16:44.533 "state": "online", 00:16:44.533 "raid_level": "raid1", 00:16:44.533 "superblock": false, 00:16:44.533 "num_base_bdevs": 4, 00:16:44.533 "num_base_bdevs_discovered": 3, 00:16:44.533 "num_base_bdevs_operational": 3, 00:16:44.533 "base_bdevs_list": [ 00:16:44.533 { 00:16:44.533 
"name": "spare", 00:16:44.533 "uuid": "09e318c8-c735-5791-87c2-9fe0cbb4833c", 00:16:44.533 "is_configured": true, 00:16:44.533 "data_offset": 0, 00:16:44.533 "data_size": 65536 00:16:44.533 }, 00:16:44.533 { 00:16:44.533 "name": null, 00:16:44.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.533 "is_configured": false, 00:16:44.533 "data_offset": 0, 00:16:44.533 "data_size": 65536 00:16:44.533 }, 00:16:44.533 { 00:16:44.533 "name": "BaseBdev3", 00:16:44.533 "uuid": "1ec37503-2d62-58a4-bc6a-01935a1a2b6e", 00:16:44.533 "is_configured": true, 00:16:44.533 "data_offset": 0, 00:16:44.533 "data_size": 65536 00:16:44.533 }, 00:16:44.533 { 00:16:44.533 "name": "BaseBdev4", 00:16:44.533 "uuid": "136f7603-97d3-501b-9cc9-27a3f50a61ab", 00:16:44.533 "is_configured": true, 00:16:44.533 "data_offset": 0, 00:16:44.533 "data_size": 65536 00:16:44.533 } 00:16:44.533 ] 00:16:44.533 }' 00:16:44.533 20:13:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:44.533 20:13:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:44.533 20:13:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:44.792 20:13:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:44.792 20:13:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:44.792 20:13:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:44.792 20:13:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:44.792 20:13:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:44.792 20:13:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:44.792 20:13:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:16:44.792 20:13:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.792 20:13:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.792 20:13:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:44.792 20:13:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:44.792 20:13:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.792 20:13:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.792 20:13:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.792 20:13:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:44.792 20:13:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.792 20:13:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.792 "name": "raid_bdev1", 00:16:44.792 "uuid": "de42088c-d0a7-4c82-b2de-063247ac7608", 00:16:44.792 "strip_size_kb": 0, 00:16:44.792 "state": "online", 00:16:44.792 "raid_level": "raid1", 00:16:44.792 "superblock": false, 00:16:44.792 "num_base_bdevs": 4, 00:16:44.792 "num_base_bdevs_discovered": 3, 00:16:44.792 "num_base_bdevs_operational": 3, 00:16:44.792 "base_bdevs_list": [ 00:16:44.792 { 00:16:44.792 "name": "spare", 00:16:44.792 "uuid": "09e318c8-c735-5791-87c2-9fe0cbb4833c", 00:16:44.792 "is_configured": true, 00:16:44.792 "data_offset": 0, 00:16:44.792 "data_size": 65536 00:16:44.792 }, 00:16:44.792 { 00:16:44.792 "name": null, 00:16:44.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.792 "is_configured": false, 00:16:44.792 "data_offset": 0, 00:16:44.792 "data_size": 65536 00:16:44.792 }, 00:16:44.792 { 00:16:44.792 "name": "BaseBdev3", 00:16:44.792 "uuid": 
"1ec37503-2d62-58a4-bc6a-01935a1a2b6e", 00:16:44.792 "is_configured": true, 00:16:44.792 "data_offset": 0, 00:16:44.792 "data_size": 65536 00:16:44.792 }, 00:16:44.792 { 00:16:44.792 "name": "BaseBdev4", 00:16:44.792 "uuid": "136f7603-97d3-501b-9cc9-27a3f50a61ab", 00:16:44.792 "is_configured": true, 00:16:44.792 "data_offset": 0, 00:16:44.792 "data_size": 65536 00:16:44.792 } 00:16:44.792 ] 00:16:44.792 }' 00:16:44.792 20:13:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.792 20:13:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:45.363 75.50 IOPS, 226.50 MiB/s [2024-10-17T20:13:31.017Z] 20:13:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:45.363 20:13:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.363 20:13:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:45.364 [2024-10-17 20:13:30.756666] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:45.364 [2024-10-17 20:13:30.756707] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:45.364 00:16:45.364 Latency(us) 00:16:45.364 [2024-10-17T20:13:31.018Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:45.364 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:16:45.364 raid_bdev1 : 8.46 72.55 217.64 0.00 0.00 18761.20 262.52 116773.24 00:16:45.364 [2024-10-17T20:13:31.018Z] =================================================================================================================== 00:16:45.364 [2024-10-17T20:13:31.018Z] Total : 72.55 217.64 0.00 0.00 18761.20 262.52 116773.24 00:16:45.364 [2024-10-17 20:13:30.831653] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:45.364 [2024-10-17 20:13:30.831749] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:45.364 { 00:16:45.364 "results": [ 00:16:45.364 { 00:16:45.364 "job": "raid_bdev1", 00:16:45.364 "core_mask": "0x1", 00:16:45.364 "workload": "randrw", 00:16:45.364 "percentage": 50, 00:16:45.364 "status": "finished", 00:16:45.364 "queue_depth": 2, 00:16:45.364 "io_size": 3145728, 00:16:45.364 "runtime": 8.463527, 00:16:45.364 "iops": 72.54658725611675, 00:16:45.364 "mibps": 217.63976176835024, 00:16:45.364 "io_failed": 0, 00:16:45.364 "io_timeout": 0, 00:16:45.364 "avg_latency_us": 18761.196138584542, 00:16:45.364 "min_latency_us": 262.5163636363636, 00:16:45.364 "max_latency_us": 116773.23636363636 00:16:45.364 } 00:16:45.364 ], 00:16:45.364 "core_count": 1 00:16:45.364 } 00:16:45.364 [2024-10-17 20:13:30.831895] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:45.364 [2024-10-17 20:13:30.831912] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:45.364 20:13:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.364 20:13:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.364 20:13:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:16:45.364 20:13:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.364 20:13:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:45.364 20:13:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.364 20:13:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:45.364 20:13:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:45.364 20:13:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:16:45.364 20:13:30 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:16:45.364 20:13:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:45.364 20:13:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:16:45.364 20:13:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:45.364 20:13:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:45.364 20:13:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:45.364 20:13:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:16:45.364 20:13:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:45.364 20:13:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:45.364 20:13:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:16:45.621 /dev/nbd0 00:16:45.621 20:13:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:45.621 20:13:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:45.621 20:13:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:45.621 20:13:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:16:45.621 20:13:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:45.621 20:13:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:45.621 20:13:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:45.621 20:13:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:16:45.621 20:13:31 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:45.621 20:13:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:45.621 20:13:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:45.621 1+0 records in 00:16:45.621 1+0 records out 00:16:45.621 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000415895 s, 9.8 MB/s 00:16:45.621 20:13:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:45.621 20:13:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:16:45.621 20:13:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:45.621 20:13:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:45.621 20:13:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:16:45.621 20:13:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:45.621 20:13:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:45.621 20:13:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:45.621 20:13:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:16:45.621 20:13:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:16:45.621 20:13:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:45.621 20:13:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:16:45.621 20:13:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:16:45.621 20:13:31 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:45.621 20:13:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:16:45.621 20:13:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:45.621 20:13:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:16:45.621 20:13:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:45.621 20:13:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:16:45.621 20:13:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:45.621 20:13:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:45.621 20:13:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:16:45.879 /dev/nbd1 00:16:46.138 20:13:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:46.138 20:13:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:46.138 20:13:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:16:46.138 20:13:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:16:46.138 20:13:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:46.138 20:13:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:46.138 20:13:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:16:46.138 20:13:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:16:46.138 20:13:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:46.138 20:13:31 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:46.138 20:13:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:46.138 1+0 records in 00:16:46.138 1+0 records out 00:16:46.138 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000331398 s, 12.4 MB/s 00:16:46.138 20:13:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:46.138 20:13:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:16:46.138 20:13:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:46.138 20:13:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:46.138 20:13:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:16:46.138 20:13:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:46.138 20:13:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:46.138 20:13:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:46.138 20:13:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:46.138 20:13:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:46.138 20:13:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:46.138 20:13:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:46.138 20:13:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:16:46.138 20:13:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:46.138 20:13:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:46.705 20:13:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:46.705 20:13:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:46.705 20:13:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:46.705 20:13:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:46.705 20:13:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:46.705 20:13:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:46.705 20:13:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:16:46.705 20:13:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:46.705 20:13:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:46.705 20:13:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:16:46.705 20:13:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:16:46.705 20:13:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:46.705 20:13:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:16:46.705 20:13:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:46.705 20:13:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:16:46.705 20:13:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:46.705 20:13:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:16:46.705 20:13:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:46.705 20:13:32 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:46.705 20:13:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:16:46.964 /dev/nbd1 00:16:46.964 20:13:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:46.964 20:13:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:46.964 20:13:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:16:46.964 20:13:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:16:46.964 20:13:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:46.964 20:13:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:46.964 20:13:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:16:46.964 20:13:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:16:46.964 20:13:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:46.964 20:13:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:46.964 20:13:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:46.964 1+0 records in 00:16:46.964 1+0 records out 00:16:46.964 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00034741 s, 11.8 MB/s 00:16:46.964 20:13:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:46.964 20:13:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:16:46.964 20:13:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:46.964 20:13:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:46.964 20:13:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:16:46.964 20:13:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:46.964 20:13:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:46.964 20:13:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:46.964 20:13:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:46.964 20:13:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:46.964 20:13:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:46.964 20:13:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:46.964 20:13:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:16:46.964 20:13:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:46.964 20:13:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:47.223 20:13:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:47.223 20:13:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:47.223 20:13:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:47.223 20:13:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:47.223 20:13:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:47.223 20:13:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 
00:16:47.223 20:13:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:16:47.223 20:13:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:47.223 20:13:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:47.223 20:13:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:47.223 20:13:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:47.223 20:13:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:47.223 20:13:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:16:47.223 20:13:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:47.223 20:13:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:47.482 20:13:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:47.482 20:13:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:47.482 20:13:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:47.482 20:13:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:47.482 20:13:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:47.482 20:13:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:47.482 20:13:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:16:47.482 20:13:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:47.482 20:13:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:47.482 20:13:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # 
killprocess 78963 00:16:47.482 20:13:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 78963 ']' 00:16:47.482 20:13:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 78963 00:16:47.482 20:13:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:16:47.482 20:13:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:47.482 20:13:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78963 00:16:47.482 20:13:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:47.482 killing process with pid 78963 00:16:47.482 Received shutdown signal, test time was about 10.739679 seconds 00:16:47.482 00:16:47.482 Latency(us) 00:16:47.482 [2024-10-17T20:13:33.136Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:47.482 [2024-10-17T20:13:33.136Z] =================================================================================================================== 00:16:47.482 [2024-10-17T20:13:33.136Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:47.482 20:13:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:47.482 20:13:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78963' 00:16:47.482 20:13:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 78963 00:16:47.482 20:13:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 78963 00:16:47.482 [2024-10-17 20:13:33.088735] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:48.049 [2024-10-17 20:13:33.446166] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:48.985 20:13:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:16:48.985 00:16:48.985 real 0m14.275s 00:16:48.985 user 
0m18.858s 00:16:48.985 sys 0m1.852s 00:16:48.985 20:13:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:48.985 20:13:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:48.985 ************************************ 00:16:48.985 END TEST raid_rebuild_test_io 00:16:48.985 ************************************ 00:16:48.985 20:13:34 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:16:48.985 20:13:34 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:16:48.985 20:13:34 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:48.985 20:13:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:48.985 ************************************ 00:16:48.985 START TEST raid_rebuild_test_sb_io 00:16:48.985 ************************************ 00:16:48.985 20:13:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true true true 00:16:48.985 20:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:48.985 20:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:48.985 20:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:48.985 20:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:16:48.985 20:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:48.985 20:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:48.985 20:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:48.985 20:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:48.985 20:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 
00:16:48.985 20:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:48.985 20:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:48.985 20:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:48.985 20:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:48.985 20:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:48.985 20:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:48.985 20:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:48.985 20:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:48.985 20:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:48.985 20:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:48.985 20:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:48.985 20:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:48.985 20:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:48.985 20:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:48.985 20:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:48.985 20:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:48.985 20:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:48.986 20:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:48.986 20:13:34 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:48.986 20:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:48.986 20:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:48.986 20:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79383 00:16:48.986 20:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79383 00:16:48.986 20:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:48.986 20:13:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 79383 ']' 00:16:48.986 20:13:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:48.986 20:13:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:48.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:48.986 20:13:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:48.986 20:13:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:48.986 20:13:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:48.986 [2024-10-17 20:13:34.621333] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 00:16:48.986 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:48.986 Zero copy mechanism will not be used. 
00:16:48.986 [2024-10-17 20:13:34.621560] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79383 ] 00:16:49.244 [2024-10-17 20:13:34.795949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.502 [2024-10-17 20:13:34.921231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:49.502 [2024-10-17 20:13:35.103527] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:49.502 [2024-10-17 20:13:35.103605] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:50.069 20:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:50.069 20:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:16:50.069 20:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:50.069 20:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:50.069 20:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.069 20:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:50.069 BaseBdev1_malloc 00:16:50.069 20:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.069 20:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:50.069 20:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.069 20:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:50.327 [2024-10-17 20:13:35.722820] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:50.328 [2024-10-17 20:13:35.722934] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:50.328 [2024-10-17 20:13:35.722966] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:50.328 [2024-10-17 20:13:35.722986] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:50.328 [2024-10-17 20:13:35.725883] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:50.328 [2024-10-17 20:13:35.725962] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:50.328 BaseBdev1 00:16:50.328 20:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.328 20:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:50.328 20:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:50.328 20:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.328 20:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:50.328 BaseBdev2_malloc 00:16:50.328 20:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.328 20:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:50.328 20:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.328 20:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:50.328 [2024-10-17 20:13:35.768302] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:50.328 [2024-10-17 20:13:35.768384] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:16:50.328 [2024-10-17 20:13:35.768412] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:50.328 [2024-10-17 20:13:35.768429] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:50.328 [2024-10-17 20:13:35.771158] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:50.328 [2024-10-17 20:13:35.771218] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:50.328 BaseBdev2 00:16:50.328 20:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.328 20:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:50.328 20:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:50.328 20:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.328 20:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:50.328 BaseBdev3_malloc 00:16:50.328 20:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.328 20:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:50.328 20:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.328 20:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:50.328 [2024-10-17 20:13:35.827236] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:50.328 [2024-10-17 20:13:35.827306] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:50.328 [2024-10-17 20:13:35.827338] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:50.328 
[2024-10-17 20:13:35.827357] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:50.328 [2024-10-17 20:13:35.830145] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:50.328 [2024-10-17 20:13:35.830198] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:50.328 BaseBdev3 00:16:50.328 20:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.328 20:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:50.328 20:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:50.328 20:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.328 20:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:50.328 BaseBdev4_malloc 00:16:50.328 20:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.328 20:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:50.328 20:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.328 20:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:50.328 [2024-10-17 20:13:35.879421] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:50.328 [2024-10-17 20:13:35.879549] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:50.328 [2024-10-17 20:13:35.879613] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:50.328 [2024-10-17 20:13:35.879631] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:50.328 [2024-10-17 20:13:35.882571] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:50.328 [2024-10-17 20:13:35.882650] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:50.328 BaseBdev4 00:16:50.328 20:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.328 20:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:50.328 20:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.328 20:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:50.328 spare_malloc 00:16:50.328 20:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.328 20:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:50.328 20:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.328 20:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:50.328 spare_delay 00:16:50.328 20:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.328 20:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:50.328 20:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.328 20:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:50.328 [2024-10-17 20:13:35.942405] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:50.328 [2024-10-17 20:13:35.942499] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:50.328 [2024-10-17 20:13:35.942531] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000a880 00:16:50.328 [2024-10-17 20:13:35.942549] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:50.328 [2024-10-17 20:13:35.945357] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:50.328 [2024-10-17 20:13:35.945606] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:50.328 spare 00:16:50.328 20:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.328 20:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:50.328 20:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.328 20:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:50.328 [2024-10-17 20:13:35.950553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:50.328 [2024-10-17 20:13:35.952878] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:50.328 [2024-10-17 20:13:35.952959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:50.328 [2024-10-17 20:13:35.953063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:50.328 [2024-10-17 20:13:35.953496] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:50.328 [2024-10-17 20:13:35.953696] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:50.328 [2024-10-17 20:13:35.954066] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:50.328 [2024-10-17 20:13:35.954463] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:50.328 [2024-10-17 20:13:35.954601] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:50.328 [2024-10-17 20:13:35.954951] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:50.328 20:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.328 20:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:50.328 20:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:50.328 20:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:50.328 20:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:50.328 20:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:50.328 20:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:50.328 20:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.328 20:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.328 20:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.328 20:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.328 20:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.328 20:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.328 20:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.328 20:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:50.328 20:13:35 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.587 20:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.587 "name": "raid_bdev1", 00:16:50.587 "uuid": "236e9481-0157-45cc-874b-bdd9199ed1f2", 00:16:50.587 "strip_size_kb": 0, 00:16:50.587 "state": "online", 00:16:50.587 "raid_level": "raid1", 00:16:50.587 "superblock": true, 00:16:50.587 "num_base_bdevs": 4, 00:16:50.587 "num_base_bdevs_discovered": 4, 00:16:50.587 "num_base_bdevs_operational": 4, 00:16:50.587 "base_bdevs_list": [ 00:16:50.587 { 00:16:50.587 "name": "BaseBdev1", 00:16:50.587 "uuid": "8cf8a613-9f64-5127-a08e-6365c53736a0", 00:16:50.587 "is_configured": true, 00:16:50.587 "data_offset": 2048, 00:16:50.587 "data_size": 63488 00:16:50.587 }, 00:16:50.587 { 00:16:50.587 "name": "BaseBdev2", 00:16:50.587 "uuid": "aa7c4517-5274-5112-823e-6ffe88f299f2", 00:16:50.587 "is_configured": true, 00:16:50.587 "data_offset": 2048, 00:16:50.587 "data_size": 63488 00:16:50.587 }, 00:16:50.587 { 00:16:50.587 "name": "BaseBdev3", 00:16:50.587 "uuid": "c89bb8a6-230e-534a-a195-c25ebb492b71", 00:16:50.587 "is_configured": true, 00:16:50.587 "data_offset": 2048, 00:16:50.587 "data_size": 63488 00:16:50.587 }, 00:16:50.587 { 00:16:50.587 "name": "BaseBdev4", 00:16:50.587 "uuid": "36dd254d-2318-5ca0-bb8d-3ef3864ac2ba", 00:16:50.587 "is_configured": true, 00:16:50.587 "data_offset": 2048, 00:16:50.587 "data_size": 63488 00:16:50.587 } 00:16:50.587 ] 00:16:50.587 }' 00:16:50.587 20:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.587 20:13:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:50.846 20:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:50.846 20:13:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.846 20:13:36 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:16:50.846 20:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:50.846 [2024-10-17 20:13:36.491550] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:51.104 20:13:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.104 20:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:16:51.104 20:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:51.104 20:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.104 20:13:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.104 20:13:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:51.104 20:13:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.104 20:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:51.104 20:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:16:51.104 20:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:51.104 20:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:51.104 20:13:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.104 20:13:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:51.104 [2024-10-17 20:13:36.603064] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:51.104 20:13:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.104 20:13:36 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:51.104 20:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:51.104 20:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:51.105 20:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:51.105 20:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:51.105 20:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:51.105 20:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:51.105 20:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:51.105 20:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:51.105 20:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:51.105 20:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.105 20:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.105 20:13:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.105 20:13:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:51.105 20:13:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.105 20:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:51.105 "name": "raid_bdev1", 00:16:51.105 "uuid": "236e9481-0157-45cc-874b-bdd9199ed1f2", 00:16:51.105 "strip_size_kb": 0, 00:16:51.105 "state": "online", 00:16:51.105 "raid_level": "raid1", 00:16:51.105 
"superblock": true, 00:16:51.105 "num_base_bdevs": 4, 00:16:51.105 "num_base_bdevs_discovered": 3, 00:16:51.105 "num_base_bdevs_operational": 3, 00:16:51.105 "base_bdevs_list": [ 00:16:51.105 { 00:16:51.105 "name": null, 00:16:51.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.105 "is_configured": false, 00:16:51.105 "data_offset": 0, 00:16:51.105 "data_size": 63488 00:16:51.105 }, 00:16:51.105 { 00:16:51.105 "name": "BaseBdev2", 00:16:51.105 "uuid": "aa7c4517-5274-5112-823e-6ffe88f299f2", 00:16:51.105 "is_configured": true, 00:16:51.105 "data_offset": 2048, 00:16:51.105 "data_size": 63488 00:16:51.105 }, 00:16:51.105 { 00:16:51.105 "name": "BaseBdev3", 00:16:51.105 "uuid": "c89bb8a6-230e-534a-a195-c25ebb492b71", 00:16:51.105 "is_configured": true, 00:16:51.105 "data_offset": 2048, 00:16:51.105 "data_size": 63488 00:16:51.105 }, 00:16:51.105 { 00:16:51.105 "name": "BaseBdev4", 00:16:51.105 "uuid": "36dd254d-2318-5ca0-bb8d-3ef3864ac2ba", 00:16:51.105 "is_configured": true, 00:16:51.105 "data_offset": 2048, 00:16:51.105 "data_size": 63488 00:16:51.105 } 00:16:51.105 ] 00:16:51.105 }' 00:16:51.105 20:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.105 20:13:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:51.105 [2024-10-17 20:13:36.731132] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:51.105 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:51.105 Zero copy mechanism will not be used. 00:16:51.105 Running I/O for 60 seconds... 
00:16:51.672 20:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:51.672 20:13:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.672 20:13:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:51.672 [2024-10-17 20:13:37.152353] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:51.672 20:13:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.672 20:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:51.672 [2024-10-17 20:13:37.231359] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:16:51.672 [2024-10-17 20:13:37.234098] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:51.930 [2024-10-17 20:13:37.359382] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:51.930 [2024-10-17 20:13:37.361242] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:52.189 [2024-10-17 20:13:37.599070] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:52.189 [2024-10-17 20:13:37.600394] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:52.448 142.00 IOPS, 426.00 MiB/s [2024-10-17T20:13:38.102Z] [2024-10-17 20:13:37.930166] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:52.448 [2024-10-17 20:13:37.932057] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:52.707 [2024-10-17 20:13:38.179249] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:52.707 [2024-10-17 20:13:38.180428] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:52.707 20:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:52.707 20:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:52.707 20:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:52.707 20:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:52.707 20:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:52.707 20:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.707 20:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.707 20:13:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.707 20:13:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:52.707 20:13:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.707 20:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:52.707 "name": "raid_bdev1", 00:16:52.707 "uuid": "236e9481-0157-45cc-874b-bdd9199ed1f2", 00:16:52.707 "strip_size_kb": 0, 00:16:52.707 "state": "online", 00:16:52.707 "raid_level": "raid1", 00:16:52.707 "superblock": true, 00:16:52.707 "num_base_bdevs": 4, 00:16:52.707 "num_base_bdevs_discovered": 4, 00:16:52.707 "num_base_bdevs_operational": 4, 00:16:52.707 "process": { 00:16:52.707 "type": "rebuild", 00:16:52.707 "target": "spare", 00:16:52.707 "progress": { 
00:16:52.707 "blocks": 10240, 00:16:52.707 "percent": 16 00:16:52.707 } 00:16:52.707 }, 00:16:52.707 "base_bdevs_list": [ 00:16:52.707 { 00:16:52.707 "name": "spare", 00:16:52.707 "uuid": "16092b7e-d1d4-542b-9745-65ae1ea6c0d5", 00:16:52.707 "is_configured": true, 00:16:52.707 "data_offset": 2048, 00:16:52.707 "data_size": 63488 00:16:52.707 }, 00:16:52.707 { 00:16:52.707 "name": "BaseBdev2", 00:16:52.707 "uuid": "aa7c4517-5274-5112-823e-6ffe88f299f2", 00:16:52.707 "is_configured": true, 00:16:52.707 "data_offset": 2048, 00:16:52.707 "data_size": 63488 00:16:52.707 }, 00:16:52.707 { 00:16:52.707 "name": "BaseBdev3", 00:16:52.707 "uuid": "c89bb8a6-230e-534a-a195-c25ebb492b71", 00:16:52.707 "is_configured": true, 00:16:52.707 "data_offset": 2048, 00:16:52.707 "data_size": 63488 00:16:52.707 }, 00:16:52.707 { 00:16:52.707 "name": "BaseBdev4", 00:16:52.707 "uuid": "36dd254d-2318-5ca0-bb8d-3ef3864ac2ba", 00:16:52.707 "is_configured": true, 00:16:52.707 "data_offset": 2048, 00:16:52.707 "data_size": 63488 00:16:52.707 } 00:16:52.707 ] 00:16:52.707 }' 00:16:52.707 20:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:52.707 20:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:52.707 20:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:52.966 20:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:52.966 20:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:52.966 20:13:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.966 20:13:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:52.966 [2024-10-17 20:13:38.386868] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:52.966 [2024-10-17 
20:13:38.517538] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:52.966 [2024-10-17 20:13:38.529846] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:52.966 [2024-10-17 20:13:38.530286] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:52.966 [2024-10-17 20:13:38.530327] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:52.966 [2024-10-17 20:13:38.544681] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:16:52.966 20:13:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.966 20:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:52.966 20:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:52.966 20:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:52.966 20:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:52.966 20:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:52.966 20:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:52.966 20:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.966 20:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.966 20:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.966 20:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.966 20:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:16:52.966 20:13:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.966 20:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.966 20:13:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:52.966 20:13:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.225 20:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:53.225 "name": "raid_bdev1", 00:16:53.225 "uuid": "236e9481-0157-45cc-874b-bdd9199ed1f2", 00:16:53.225 "strip_size_kb": 0, 00:16:53.225 "state": "online", 00:16:53.225 "raid_level": "raid1", 00:16:53.225 "superblock": true, 00:16:53.225 "num_base_bdevs": 4, 00:16:53.225 "num_base_bdevs_discovered": 3, 00:16:53.225 "num_base_bdevs_operational": 3, 00:16:53.225 "base_bdevs_list": [ 00:16:53.225 { 00:16:53.225 "name": null, 00:16:53.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.225 "is_configured": false, 00:16:53.225 "data_offset": 0, 00:16:53.225 "data_size": 63488 00:16:53.225 }, 00:16:53.225 { 00:16:53.225 "name": "BaseBdev2", 00:16:53.225 "uuid": "aa7c4517-5274-5112-823e-6ffe88f299f2", 00:16:53.225 "is_configured": true, 00:16:53.225 "data_offset": 2048, 00:16:53.225 "data_size": 63488 00:16:53.225 }, 00:16:53.225 { 00:16:53.225 "name": "BaseBdev3", 00:16:53.225 "uuid": "c89bb8a6-230e-534a-a195-c25ebb492b71", 00:16:53.225 "is_configured": true, 00:16:53.225 "data_offset": 2048, 00:16:53.225 "data_size": 63488 00:16:53.225 }, 00:16:53.225 { 00:16:53.225 "name": "BaseBdev4", 00:16:53.225 "uuid": "36dd254d-2318-5ca0-bb8d-3ef3864ac2ba", 00:16:53.225 "is_configured": true, 00:16:53.225 "data_offset": 2048, 00:16:53.225 "data_size": 63488 00:16:53.225 } 00:16:53.225 ] 00:16:53.225 }' 00:16:53.225 20:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:53.225 20:13:38 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:53.483 146.00 IOPS, 438.00 MiB/s [2024-10-17T20:13:39.137Z] 20:13:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:53.483 20:13:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:53.483 20:13:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:53.483 20:13:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:53.483 20:13:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:53.483 20:13:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.483 20:13:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.483 20:13:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:53.483 20:13:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.483 20:13:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.742 20:13:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:53.742 "name": "raid_bdev1", 00:16:53.742 "uuid": "236e9481-0157-45cc-874b-bdd9199ed1f2", 00:16:53.742 "strip_size_kb": 0, 00:16:53.742 "state": "online", 00:16:53.742 "raid_level": "raid1", 00:16:53.742 "superblock": true, 00:16:53.742 "num_base_bdevs": 4, 00:16:53.742 "num_base_bdevs_discovered": 3, 00:16:53.742 "num_base_bdevs_operational": 3, 00:16:53.742 "base_bdevs_list": [ 00:16:53.742 { 00:16:53.742 "name": null, 00:16:53.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.742 "is_configured": false, 00:16:53.742 "data_offset": 0, 00:16:53.742 "data_size": 63488 00:16:53.742 }, 00:16:53.742 { 
00:16:53.742 "name": "BaseBdev2", 00:16:53.742 "uuid": "aa7c4517-5274-5112-823e-6ffe88f299f2", 00:16:53.742 "is_configured": true, 00:16:53.742 "data_offset": 2048, 00:16:53.742 "data_size": 63488 00:16:53.742 }, 00:16:53.742 { 00:16:53.742 "name": "BaseBdev3", 00:16:53.742 "uuid": "c89bb8a6-230e-534a-a195-c25ebb492b71", 00:16:53.742 "is_configured": true, 00:16:53.742 "data_offset": 2048, 00:16:53.742 "data_size": 63488 00:16:53.742 }, 00:16:53.742 { 00:16:53.742 "name": "BaseBdev4", 00:16:53.742 "uuid": "36dd254d-2318-5ca0-bb8d-3ef3864ac2ba", 00:16:53.742 "is_configured": true, 00:16:53.742 "data_offset": 2048, 00:16:53.742 "data_size": 63488 00:16:53.742 } 00:16:53.742 ] 00:16:53.742 }' 00:16:53.742 20:13:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:53.742 20:13:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:53.742 20:13:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:53.742 20:13:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:53.742 20:13:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:53.742 20:13:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.742 20:13:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:53.742 [2024-10-17 20:13:39.258052] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:53.742 20:13:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.742 20:13:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:53.742 [2024-10-17 20:13:39.325626] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:53.742 [2024-10-17 20:13:39.328310] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:54.001 [2024-10-17 20:13:39.440158] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:54.001 [2024-10-17 20:13:39.440828] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:54.001 [2024-10-17 20:13:39.651745] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:54.259 [2024-10-17 20:13:39.652803] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:54.567 143.33 IOPS, 430.00 MiB/s [2024-10-17T20:13:40.221Z] [2024-10-17 20:13:39.990106] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:54.567 [2024-10-17 20:13:40.132255] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:54.827 20:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:54.827 20:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:54.827 20:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:54.827 20:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:54.827 20:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:54.827 20:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.827 20:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.827 20:13:40 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.827 20:13:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:54.827 20:13:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.827 20:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:54.827 "name": "raid_bdev1", 00:16:54.827 "uuid": "236e9481-0157-45cc-874b-bdd9199ed1f2", 00:16:54.827 "strip_size_kb": 0, 00:16:54.827 "state": "online", 00:16:54.827 "raid_level": "raid1", 00:16:54.827 "superblock": true, 00:16:54.827 "num_base_bdevs": 4, 00:16:54.827 "num_base_bdevs_discovered": 4, 00:16:54.827 "num_base_bdevs_operational": 4, 00:16:54.827 "process": { 00:16:54.827 "type": "rebuild", 00:16:54.827 "target": "spare", 00:16:54.827 "progress": { 00:16:54.827 "blocks": 10240, 00:16:54.827 "percent": 16 00:16:54.827 } 00:16:54.827 }, 00:16:54.827 "base_bdevs_list": [ 00:16:54.827 { 00:16:54.827 "name": "spare", 00:16:54.827 "uuid": "16092b7e-d1d4-542b-9745-65ae1ea6c0d5", 00:16:54.827 "is_configured": true, 00:16:54.827 "data_offset": 2048, 00:16:54.827 "data_size": 63488 00:16:54.827 }, 00:16:54.827 { 00:16:54.827 "name": "BaseBdev2", 00:16:54.827 "uuid": "aa7c4517-5274-5112-823e-6ffe88f299f2", 00:16:54.827 "is_configured": true, 00:16:54.827 "data_offset": 2048, 00:16:54.827 "data_size": 63488 00:16:54.827 }, 00:16:54.827 { 00:16:54.827 "name": "BaseBdev3", 00:16:54.827 "uuid": "c89bb8a6-230e-534a-a195-c25ebb492b71", 00:16:54.827 "is_configured": true, 00:16:54.827 "data_offset": 2048, 00:16:54.827 "data_size": 63488 00:16:54.827 }, 00:16:54.827 { 00:16:54.827 "name": "BaseBdev4", 00:16:54.827 "uuid": "36dd254d-2318-5ca0-bb8d-3ef3864ac2ba", 00:16:54.827 "is_configured": true, 00:16:54.827 "data_offset": 2048, 00:16:54.827 "data_size": 63488 00:16:54.827 } 00:16:54.827 ] 00:16:54.827 }' 00:16:54.827 20:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type 
// "none"' 00:16:54.827 20:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:54.827 20:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:55.086 [2024-10-17 20:13:40.478384] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:55.086 20:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:55.086 20:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:55.086 20:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:55.086 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:55.086 20:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:55.086 20:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:55.086 20:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:16:55.086 20:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:55.086 20:13:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.086 20:13:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:55.086 [2024-10-17 20:13:40.489506] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:55.345 120.00 IOPS, 360.00 MiB/s [2024-10-17T20:13:40.999Z] [2024-10-17 20:13:40.926331] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:16:55.345 [2024-10-17 20:13:40.926622] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:16:55.345 20:13:40 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.345 20:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:16:55.345 20:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:16:55.345 20:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:55.345 20:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:55.345 20:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:55.345 20:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:55.345 20:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:55.345 20:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.345 20:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.345 20:13:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.345 20:13:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:55.345 20:13:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.604 20:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:55.604 "name": "raid_bdev1", 00:16:55.604 "uuid": "236e9481-0157-45cc-874b-bdd9199ed1f2", 00:16:55.604 "strip_size_kb": 0, 00:16:55.604 "state": "online", 00:16:55.604 "raid_level": "raid1", 00:16:55.604 "superblock": true, 00:16:55.604 "num_base_bdevs": 4, 00:16:55.604 "num_base_bdevs_discovered": 3, 00:16:55.604 "num_base_bdevs_operational": 3, 00:16:55.604 "process": { 00:16:55.604 "type": "rebuild", 00:16:55.604 "target": "spare", 00:16:55.604 "progress": { 
00:16:55.604 "blocks": 16384, 00:16:55.604 "percent": 25 00:16:55.604 } 00:16:55.604 }, 00:16:55.604 "base_bdevs_list": [ 00:16:55.604 { 00:16:55.604 "name": "spare", 00:16:55.604 "uuid": "16092b7e-d1d4-542b-9745-65ae1ea6c0d5", 00:16:55.604 "is_configured": true, 00:16:55.604 "data_offset": 2048, 00:16:55.604 "data_size": 63488 00:16:55.604 }, 00:16:55.604 { 00:16:55.604 "name": null, 00:16:55.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.604 "is_configured": false, 00:16:55.604 "data_offset": 0, 00:16:55.604 "data_size": 63488 00:16:55.604 }, 00:16:55.604 { 00:16:55.604 "name": "BaseBdev3", 00:16:55.604 "uuid": "c89bb8a6-230e-534a-a195-c25ebb492b71", 00:16:55.604 "is_configured": true, 00:16:55.604 "data_offset": 2048, 00:16:55.604 "data_size": 63488 00:16:55.604 }, 00:16:55.604 { 00:16:55.604 "name": "BaseBdev4", 00:16:55.604 "uuid": "36dd254d-2318-5ca0-bb8d-3ef3864ac2ba", 00:16:55.604 "is_configured": true, 00:16:55.604 "data_offset": 2048, 00:16:55.604 "data_size": 63488 00:16:55.604 } 00:16:55.604 ] 00:16:55.604 }' 00:16:55.604 20:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:55.604 20:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:55.604 20:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:55.604 20:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:55.604 20:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=536 00:16:55.604 20:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:55.604 20:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:55.604 20:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:55.604 
20:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:55.604 20:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:55.604 20:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:55.604 20:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.604 20:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.604 20:13:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.604 20:13:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:55.604 20:13:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.604 20:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:55.604 "name": "raid_bdev1", 00:16:55.604 "uuid": "236e9481-0157-45cc-874b-bdd9199ed1f2", 00:16:55.604 "strip_size_kb": 0, 00:16:55.604 "state": "online", 00:16:55.604 "raid_level": "raid1", 00:16:55.604 "superblock": true, 00:16:55.604 "num_base_bdevs": 4, 00:16:55.604 "num_base_bdevs_discovered": 3, 00:16:55.604 "num_base_bdevs_operational": 3, 00:16:55.604 "process": { 00:16:55.604 "type": "rebuild", 00:16:55.604 "target": "spare", 00:16:55.604 "progress": { 00:16:55.604 "blocks": 18432, 00:16:55.604 "percent": 29 00:16:55.604 } 00:16:55.605 }, 00:16:55.605 "base_bdevs_list": [ 00:16:55.605 { 00:16:55.605 "name": "spare", 00:16:55.605 "uuid": "16092b7e-d1d4-542b-9745-65ae1ea6c0d5", 00:16:55.605 "is_configured": true, 00:16:55.605 "data_offset": 2048, 00:16:55.605 "data_size": 63488 00:16:55.605 }, 00:16:55.605 { 00:16:55.605 "name": null, 00:16:55.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.605 "is_configured": false, 00:16:55.605 "data_offset": 0, 00:16:55.605 "data_size": 
63488 00:16:55.605 }, 00:16:55.605 { 00:16:55.605 "name": "BaseBdev3", 00:16:55.605 "uuid": "c89bb8a6-230e-534a-a195-c25ebb492b71", 00:16:55.605 "is_configured": true, 00:16:55.605 "data_offset": 2048, 00:16:55.605 "data_size": 63488 00:16:55.605 }, 00:16:55.605 { 00:16:55.605 "name": "BaseBdev4", 00:16:55.605 "uuid": "36dd254d-2318-5ca0-bb8d-3ef3864ac2ba", 00:16:55.605 "is_configured": true, 00:16:55.605 "data_offset": 2048, 00:16:55.605 "data_size": 63488 00:16:55.605 } 00:16:55.605 ] 00:16:55.605 }' 00:16:55.605 20:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:55.605 20:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:55.605 20:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:55.863 20:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:55.863 20:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:55.863 [2024-10-17 20:13:41.501622] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:16:55.863 [2024-10-17 20:13:41.502731] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:16:56.122 107.00 IOPS, 321.00 MiB/s [2024-10-17T20:13:41.776Z] [2024-10-17 20:13:41.757762] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:16:56.703 [2024-10-17 20:13:42.126395] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:16:56.703 20:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:56.703 20:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:16:56.703 20:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:56.703 20:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:56.703 20:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:56.703 20:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:56.703 20:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.703 20:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:56.703 20:13:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.703 20:13:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:56.703 20:13:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.703 20:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:56.703 "name": "raid_bdev1", 00:16:56.703 "uuid": "236e9481-0157-45cc-874b-bdd9199ed1f2", 00:16:56.703 "strip_size_kb": 0, 00:16:56.703 "state": "online", 00:16:56.703 "raid_level": "raid1", 00:16:56.703 "superblock": true, 00:16:56.703 "num_base_bdevs": 4, 00:16:56.703 "num_base_bdevs_discovered": 3, 00:16:56.703 "num_base_bdevs_operational": 3, 00:16:56.703 "process": { 00:16:56.703 "type": "rebuild", 00:16:56.703 "target": "spare", 00:16:56.703 "progress": { 00:16:56.703 "blocks": 34816, 00:16:56.703 "percent": 54 00:16:56.703 } 00:16:56.703 }, 00:16:56.703 "base_bdevs_list": [ 00:16:56.703 { 00:16:56.703 "name": "spare", 00:16:56.703 "uuid": "16092b7e-d1d4-542b-9745-65ae1ea6c0d5", 00:16:56.703 "is_configured": true, 00:16:56.703 "data_offset": 2048, 00:16:56.703 "data_size": 63488 00:16:56.703 }, 00:16:56.703 { 00:16:56.703 "name": null, 
00:16:56.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.703 "is_configured": false, 00:16:56.703 "data_offset": 0, 00:16:56.703 "data_size": 63488 00:16:56.703 }, 00:16:56.703 { 00:16:56.703 "name": "BaseBdev3", 00:16:56.703 "uuid": "c89bb8a6-230e-534a-a195-c25ebb492b71", 00:16:56.703 "is_configured": true, 00:16:56.703 "data_offset": 2048, 00:16:56.703 "data_size": 63488 00:16:56.703 }, 00:16:56.703 { 00:16:56.703 "name": "BaseBdev4", 00:16:56.703 "uuid": "36dd254d-2318-5ca0-bb8d-3ef3864ac2ba", 00:16:56.703 "is_configured": true, 00:16:56.703 "data_offset": 2048, 00:16:56.703 "data_size": 63488 00:16:56.703 } 00:16:56.703 ] 00:16:56.703 }' 00:16:56.703 20:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:56.962 20:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:56.962 20:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:56.962 20:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:56.962 20:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:56.962 [2024-10-17 20:13:42.461254] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:16:56.962 [2024-10-17 20:13:42.601933] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:16:56.962 [2024-10-17 20:13:42.602599] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:16:57.787 98.83 IOPS, 296.50 MiB/s [2024-10-17T20:13:43.441Z] [2024-10-17 20:13:43.323946] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:16:58.046 20:13:43 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:58.046 20:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:58.046 20:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:58.046 20:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:58.046 20:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:58.046 20:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:58.046 20:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.046 20:13:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.046 20:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.046 20:13:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:58.046 20:13:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.046 20:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:58.046 "name": "raid_bdev1", 00:16:58.046 "uuid": "236e9481-0157-45cc-874b-bdd9199ed1f2", 00:16:58.046 "strip_size_kb": 0, 00:16:58.046 "state": "online", 00:16:58.046 "raid_level": "raid1", 00:16:58.046 "superblock": true, 00:16:58.046 "num_base_bdevs": 4, 00:16:58.046 "num_base_bdevs_discovered": 3, 00:16:58.046 "num_base_bdevs_operational": 3, 00:16:58.046 "process": { 00:16:58.046 "type": "rebuild", 00:16:58.046 "target": "spare", 00:16:58.046 "progress": { 00:16:58.046 "blocks": 55296, 00:16:58.046 "percent": 87 00:16:58.046 } 00:16:58.046 }, 00:16:58.046 "base_bdevs_list": [ 00:16:58.046 { 00:16:58.046 "name": "spare", 00:16:58.046 "uuid": "16092b7e-d1d4-542b-9745-65ae1ea6c0d5", 
00:16:58.046 "is_configured": true, 00:16:58.046 "data_offset": 2048, 00:16:58.046 "data_size": 63488 00:16:58.046 }, 00:16:58.046 { 00:16:58.046 "name": null, 00:16:58.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.046 "is_configured": false, 00:16:58.046 "data_offset": 0, 00:16:58.046 "data_size": 63488 00:16:58.046 }, 00:16:58.046 { 00:16:58.046 "name": "BaseBdev3", 00:16:58.046 "uuid": "c89bb8a6-230e-534a-a195-c25ebb492b71", 00:16:58.046 "is_configured": true, 00:16:58.046 "data_offset": 2048, 00:16:58.046 "data_size": 63488 00:16:58.046 }, 00:16:58.046 { 00:16:58.046 "name": "BaseBdev4", 00:16:58.046 "uuid": "36dd254d-2318-5ca0-bb8d-3ef3864ac2ba", 00:16:58.046 "is_configured": true, 00:16:58.046 "data_offset": 2048, 00:16:58.046 "data_size": 63488 00:16:58.046 } 00:16:58.046 ] 00:16:58.046 }' 00:16:58.046 20:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:58.046 20:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:58.046 20:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:58.046 20:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:58.046 20:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:58.046 [2024-10-17 20:13:43.647762] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:16:58.046 [2024-10-17 20:13:43.648844] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:16:58.564 90.43 IOPS, 271.29 MiB/s [2024-10-17T20:13:44.218Z] [2024-10-17 20:13:43.986535] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:58.564 [2024-10-17 20:13:44.092745] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: 
Finished rebuild on raid bdev raid_bdev1 00:16:58.564 [2024-10-17 20:13:44.097044] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:59.131 20:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:59.131 20:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:59.131 20:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:59.131 20:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:59.131 20:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:59.131 20:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:59.131 20:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.132 20:13:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.132 20:13:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:59.132 20:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.132 20:13:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.132 20:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:59.132 "name": "raid_bdev1", 00:16:59.132 "uuid": "236e9481-0157-45cc-874b-bdd9199ed1f2", 00:16:59.132 "strip_size_kb": 0, 00:16:59.132 "state": "online", 00:16:59.132 "raid_level": "raid1", 00:16:59.132 "superblock": true, 00:16:59.132 "num_base_bdevs": 4, 00:16:59.132 "num_base_bdevs_discovered": 3, 00:16:59.132 "num_base_bdevs_operational": 3, 00:16:59.132 "base_bdevs_list": [ 00:16:59.132 { 00:16:59.132 "name": "spare", 00:16:59.132 "uuid": 
"16092b7e-d1d4-542b-9745-65ae1ea6c0d5", 00:16:59.132 "is_configured": true, 00:16:59.132 "data_offset": 2048, 00:16:59.132 "data_size": 63488 00:16:59.132 }, 00:16:59.132 { 00:16:59.132 "name": null, 00:16:59.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.132 "is_configured": false, 00:16:59.132 "data_offset": 0, 00:16:59.132 "data_size": 63488 00:16:59.132 }, 00:16:59.132 { 00:16:59.132 "name": "BaseBdev3", 00:16:59.132 "uuid": "c89bb8a6-230e-534a-a195-c25ebb492b71", 00:16:59.132 "is_configured": true, 00:16:59.132 "data_offset": 2048, 00:16:59.132 "data_size": 63488 00:16:59.132 }, 00:16:59.132 { 00:16:59.132 "name": "BaseBdev4", 00:16:59.132 "uuid": "36dd254d-2318-5ca0-bb8d-3ef3864ac2ba", 00:16:59.132 "is_configured": true, 00:16:59.132 "data_offset": 2048, 00:16:59.132 "data_size": 63488 00:16:59.132 } 00:16:59.132 ] 00:16:59.132 }' 00:16:59.132 20:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:59.132 20:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:59.132 20:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:59.132 82.75 IOPS, 248.25 MiB/s [2024-10-17T20:13:44.786Z] 20:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:59.132 20:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:16:59.132 20:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:59.132 20:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:59.132 20:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:59.132 20:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:59.132 20:13:44 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:59.391 20:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.391 20:13:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.391 20:13:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:59.391 20:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.391 20:13:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.391 20:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:59.391 "name": "raid_bdev1", 00:16:59.391 "uuid": "236e9481-0157-45cc-874b-bdd9199ed1f2", 00:16:59.391 "strip_size_kb": 0, 00:16:59.391 "state": "online", 00:16:59.391 "raid_level": "raid1", 00:16:59.391 "superblock": true, 00:16:59.391 "num_base_bdevs": 4, 00:16:59.392 "num_base_bdevs_discovered": 3, 00:16:59.392 "num_base_bdevs_operational": 3, 00:16:59.392 "base_bdevs_list": [ 00:16:59.392 { 00:16:59.392 "name": "spare", 00:16:59.392 "uuid": "16092b7e-d1d4-542b-9745-65ae1ea6c0d5", 00:16:59.392 "is_configured": true, 00:16:59.392 "data_offset": 2048, 00:16:59.392 "data_size": 63488 00:16:59.392 }, 00:16:59.392 { 00:16:59.392 "name": null, 00:16:59.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.392 "is_configured": false, 00:16:59.392 "data_offset": 0, 00:16:59.392 "data_size": 63488 00:16:59.392 }, 00:16:59.392 { 00:16:59.392 "name": "BaseBdev3", 00:16:59.392 "uuid": "c89bb8a6-230e-534a-a195-c25ebb492b71", 00:16:59.392 "is_configured": true, 00:16:59.392 "data_offset": 2048, 00:16:59.392 "data_size": 63488 00:16:59.392 }, 00:16:59.392 { 00:16:59.392 "name": "BaseBdev4", 00:16:59.392 "uuid": "36dd254d-2318-5ca0-bb8d-3ef3864ac2ba", 00:16:59.392 "is_configured": true, 00:16:59.392 "data_offset": 2048, 00:16:59.392 "data_size": 63488 
00:16:59.392 } 00:16:59.392 ] 00:16:59.392 }' 00:16:59.392 20:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:59.392 20:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:59.392 20:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:59.392 20:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:59.392 20:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:59.392 20:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:59.392 20:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:59.392 20:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:59.392 20:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:59.392 20:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:59.392 20:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:59.392 20:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:59.392 20:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:59.392 20:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:59.392 20:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.392 20:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.392 20:13:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:59.392 20:13:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:59.392 20:13:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.392 20:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:59.392 "name": "raid_bdev1", 00:16:59.392 "uuid": "236e9481-0157-45cc-874b-bdd9199ed1f2", 00:16:59.392 "strip_size_kb": 0, 00:16:59.392 "state": "online", 00:16:59.392 "raid_level": "raid1", 00:16:59.392 "superblock": true, 00:16:59.392 "num_base_bdevs": 4, 00:16:59.392 "num_base_bdevs_discovered": 3, 00:16:59.392 "num_base_bdevs_operational": 3, 00:16:59.392 "base_bdevs_list": [ 00:16:59.392 { 00:16:59.392 "name": "spare", 00:16:59.392 "uuid": "16092b7e-d1d4-542b-9745-65ae1ea6c0d5", 00:16:59.392 "is_configured": true, 00:16:59.392 "data_offset": 2048, 00:16:59.392 "data_size": 63488 00:16:59.392 }, 00:16:59.392 { 00:16:59.392 "name": null, 00:16:59.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.392 "is_configured": false, 00:16:59.392 "data_offset": 0, 00:16:59.392 "data_size": 63488 00:16:59.392 }, 00:16:59.392 { 00:16:59.392 "name": "BaseBdev3", 00:16:59.392 "uuid": "c89bb8a6-230e-534a-a195-c25ebb492b71", 00:16:59.392 "is_configured": true, 00:16:59.392 "data_offset": 2048, 00:16:59.392 "data_size": 63488 00:16:59.392 }, 00:16:59.392 { 00:16:59.392 "name": "BaseBdev4", 00:16:59.392 "uuid": "36dd254d-2318-5ca0-bb8d-3ef3864ac2ba", 00:16:59.392 "is_configured": true, 00:16:59.392 "data_offset": 2048, 00:16:59.392 "data_size": 63488 00:16:59.392 } 00:16:59.392 ] 00:16:59.392 }' 00:16:59.392 20:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:59.392 20:13:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:59.960 20:13:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:59.960 20:13:45 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.960 20:13:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:59.960 [2024-10-17 20:13:45.461318] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:59.960 [2024-10-17 20:13:45.461357] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:59.960 00:16:59.960 Latency(us) 00:16:59.960 [2024-10-17T20:13:45.614Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:59.960 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:16:59.960 raid_bdev1 : 8.81 79.14 237.42 0.00 0.00 17383.49 269.96 120586.24 00:16:59.960 [2024-10-17T20:13:45.614Z] =================================================================================================================== 00:16:59.960 [2024-10-17T20:13:45.614Z] Total : 79.14 237.42 0.00 0.00 17383.49 269.96 120586.24 00:16:59.960 [2024-10-17 20:13:45.562279] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:59.960 [2024-10-17 20:13:45.562498] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:59.960 [2024-10-17 20:13:45.562682] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:59.960 { 00:16:59.960 "results": [ 00:16:59.960 { 00:16:59.960 "job": "raid_bdev1", 00:16:59.960 "core_mask": "0x1", 00:16:59.960 "workload": "randrw", 00:16:59.960 "percentage": 50, 00:16:59.960 "status": "finished", 00:16:59.960 "queue_depth": 2, 00:16:59.960 "io_size": 3145728, 00:16:59.960 "runtime": 8.807097, 00:16:59.960 "iops": 79.14072026230664, 00:16:59.960 "mibps": 237.42216078691993, 00:16:59.960 "io_failed": 0, 00:16:59.960 "io_timeout": 0, 00:16:59.960 "avg_latency_us": 17383.49051258641, 00:16:59.960 "min_latency_us": 269.96363636363634, 00:16:59.960 
"max_latency_us": 120586.24 00:16:59.960 } 00:16:59.960 ], 00:16:59.960 "core_count": 1 00:16:59.960 } 00:16:59.960 [2024-10-17 20:13:45.562849] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:59.960 20:13:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.960 20:13:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.960 20:13:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:16:59.960 20:13:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.960 20:13:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:59.960 20:13:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.219 20:13:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:00.219 20:13:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:00.219 20:13:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:17:00.219 20:13:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:17:00.219 20:13:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:00.219 20:13:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:17:00.219 20:13:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:00.219 20:13:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:00.219 20:13:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:00.219 20:13:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:17:00.219 20:13:45 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:00.219 20:13:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:00.219 20:13:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:17:00.478 /dev/nbd0 00:17:00.478 20:13:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:00.478 20:13:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:00.478 20:13:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:00.478 20:13:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:17:00.478 20:13:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:00.478 20:13:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:00.478 20:13:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:00.478 20:13:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:17:00.478 20:13:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:00.478 20:13:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:00.478 20:13:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:00.478 1+0 records in 00:17:00.478 1+0 records out 00:17:00.478 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000575768 s, 7.1 MB/s 00:17:00.478 20:13:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:00.478 20:13:45 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@886 -- # size=4096 00:17:00.478 20:13:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:00.478 20:13:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:00.478 20:13:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:17:00.478 20:13:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:00.478 20:13:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:00.478 20:13:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:17:00.478 20:13:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:17:00.478 20:13:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:17:00.478 20:13:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:17:00.478 20:13:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:17:00.478 20:13:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:17:00.478 20:13:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:00.478 20:13:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:17:00.478 20:13:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:00.478 20:13:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:17:00.478 20:13:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:00.478 20:13:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:17:00.478 20:13:45 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:00.478 20:13:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:00.478 20:13:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:17:00.737 /dev/nbd1 00:17:00.737 20:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:00.737 20:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:00.737 20:13:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:17:00.737 20:13:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:17:00.737 20:13:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:00.737 20:13:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:00.737 20:13:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:17:00.737 20:13:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:17:00.737 20:13:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:00.737 20:13:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:00.737 20:13:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:00.737 1+0 records in 00:17:00.737 1+0 records out 00:17:00.737 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000371156 s, 11.0 MB/s 00:17:00.737 20:13:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:00.737 20:13:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # 
size=4096 00:17:00.737 20:13:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:00.737 20:13:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:00.737 20:13:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:17:00.737 20:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:00.737 20:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:00.737 20:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:00.995 20:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:17:00.995 20:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:00.995 20:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:17:00.995 20:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:00.995 20:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:17:00.995 20:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:00.995 20:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:01.254 20:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:01.254 20:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:01.254 20:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:01.254 20:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:01.254 20:13:46 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:01.254 20:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:01.254 20:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:17:01.254 20:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:17:01.254 20:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:17:01.254 20:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:17:01.254 20:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:17:01.254 20:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:01.254 20:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:17:01.254 20:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:01.254 20:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:17:01.254 20:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:01.254 20:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:17:01.254 20:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:01.254 20:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:01.254 20:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:17:01.513 /dev/nbd1 00:17:01.513 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:01.513 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd1 00:17:01.513 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:17:01.513 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:17:01.513 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:01.513 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:01.513 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:17:01.513 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:17:01.513 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:01.513 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:01.513 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:01.513 1+0 records in 00:17:01.513 1+0 records out 00:17:01.513 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000651073 s, 6.3 MB/s 00:17:01.513 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:01.513 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:17:01.513 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:01.513 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:01.513 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:17:01.513 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:01.513 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # 
(( i < 1 )) 00:17:01.513 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:01.786 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:17:01.786 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:01.786 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:17:01.786 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:01.786 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:17:01.786 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:01.786 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:02.061 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:02.061 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:02.061 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:02.061 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:02.061 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:02.061 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:02.061 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:17:02.061 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:17:02.061 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:02.061 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:02.061 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:02.061 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:02.061 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:17:02.061 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:02.061 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:02.346 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:02.346 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:02.346 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:02.346 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:02.346 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:02.346 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:02.346 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:17:02.346 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:17:02.346 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:02.346 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:02.346 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.346 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:02.346 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.346 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:02.346 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.346 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:02.346 [2024-10-17 20:13:47.809736] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:02.346 [2024-10-17 20:13:47.809844] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:02.346 [2024-10-17 20:13:47.809882] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:17:02.346 [2024-10-17 20:13:47.809898] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:02.346 [2024-10-17 20:13:47.813136] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:02.346 [2024-10-17 20:13:47.813181] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:02.346 [2024-10-17 20:13:47.813320] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:02.346 [2024-10-17 20:13:47.813390] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:02.346 [2024-10-17 20:13:47.813576] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:02.346 [2024-10-17 20:13:47.813722] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:02.346 spare 00:17:02.346 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.346 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:02.346 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.346 20:13:47 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:02.346 [2024-10-17 20:13:47.913906] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:02.346 [2024-10-17 20:13:47.913949] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:02.346 [2024-10-17 20:13:47.914424] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:17:02.346 [2024-10-17 20:13:47.914677] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:02.346 [2024-10-17 20:13:47.914708] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:02.346 [2024-10-17 20:13:47.914946] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:02.346 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.346 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:02.346 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:02.346 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:02.346 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:02.346 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:02.346 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:02.346 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:02.346 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:02.346 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:17:02.346 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:02.346 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.346 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.347 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.347 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:02.347 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.606 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.606 "name": "raid_bdev1", 00:17:02.606 "uuid": "236e9481-0157-45cc-874b-bdd9199ed1f2", 00:17:02.606 "strip_size_kb": 0, 00:17:02.606 "state": "online", 00:17:02.606 "raid_level": "raid1", 00:17:02.606 "superblock": true, 00:17:02.606 "num_base_bdevs": 4, 00:17:02.606 "num_base_bdevs_discovered": 3, 00:17:02.606 "num_base_bdevs_operational": 3, 00:17:02.606 "base_bdevs_list": [ 00:17:02.606 { 00:17:02.606 "name": "spare", 00:17:02.606 "uuid": "16092b7e-d1d4-542b-9745-65ae1ea6c0d5", 00:17:02.606 "is_configured": true, 00:17:02.606 "data_offset": 2048, 00:17:02.606 "data_size": 63488 00:17:02.606 }, 00:17:02.606 { 00:17:02.606 "name": null, 00:17:02.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.606 "is_configured": false, 00:17:02.606 "data_offset": 2048, 00:17:02.606 "data_size": 63488 00:17:02.606 }, 00:17:02.606 { 00:17:02.606 "name": "BaseBdev3", 00:17:02.606 "uuid": "c89bb8a6-230e-534a-a195-c25ebb492b71", 00:17:02.606 "is_configured": true, 00:17:02.606 "data_offset": 2048, 00:17:02.606 "data_size": 63488 00:17:02.606 }, 00:17:02.606 { 00:17:02.606 "name": "BaseBdev4", 00:17:02.606 "uuid": "36dd254d-2318-5ca0-bb8d-3ef3864ac2ba", 00:17:02.606 "is_configured": true, 00:17:02.606 
"data_offset": 2048, 00:17:02.606 "data_size": 63488 00:17:02.606 } 00:17:02.606 ] 00:17:02.606 }' 00:17:02.606 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:02.606 20:13:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:02.866 20:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:02.866 20:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:02.866 20:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:02.866 20:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:02.866 20:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:02.866 20:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.866 20:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.866 20:13:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.866 20:13:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:02.866 20:13:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.866 20:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:02.866 "name": "raid_bdev1", 00:17:02.866 "uuid": "236e9481-0157-45cc-874b-bdd9199ed1f2", 00:17:02.866 "strip_size_kb": 0, 00:17:02.866 "state": "online", 00:17:02.866 "raid_level": "raid1", 00:17:02.866 "superblock": true, 00:17:02.866 "num_base_bdevs": 4, 00:17:02.866 "num_base_bdevs_discovered": 3, 00:17:02.866 "num_base_bdevs_operational": 3, 00:17:02.866 "base_bdevs_list": [ 00:17:02.866 { 00:17:02.866 "name": "spare", 00:17:02.866 "uuid": 
"16092b7e-d1d4-542b-9745-65ae1ea6c0d5", 00:17:02.866 "is_configured": true, 00:17:02.866 "data_offset": 2048, 00:17:02.866 "data_size": 63488 00:17:02.866 }, 00:17:02.866 { 00:17:02.866 "name": null, 00:17:02.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.866 "is_configured": false, 00:17:02.866 "data_offset": 2048, 00:17:02.866 "data_size": 63488 00:17:02.866 }, 00:17:02.866 { 00:17:02.866 "name": "BaseBdev3", 00:17:02.866 "uuid": "c89bb8a6-230e-534a-a195-c25ebb492b71", 00:17:02.866 "is_configured": true, 00:17:02.866 "data_offset": 2048, 00:17:02.866 "data_size": 63488 00:17:02.866 }, 00:17:02.866 { 00:17:02.866 "name": "BaseBdev4", 00:17:02.866 "uuid": "36dd254d-2318-5ca0-bb8d-3ef3864ac2ba", 00:17:02.866 "is_configured": true, 00:17:02.866 "data_offset": 2048, 00:17:02.866 "data_size": 63488 00:17:02.866 } 00:17:02.866 ] 00:17:02.866 }' 00:17:02.866 20:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:03.125 20:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:03.125 20:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:03.125 20:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:03.125 20:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.125 20:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:03.125 20:13:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.125 20:13:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:03.125 20:13:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.125 20:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 
00:17:03.125 20:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:03.125 20:13:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.125 20:13:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:03.125 [2024-10-17 20:13:48.670447] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:03.125 20:13:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.125 20:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:03.125 20:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:03.125 20:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:03.125 20:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:03.125 20:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:03.125 20:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:03.125 20:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:03.125 20:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:03.125 20:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:03.125 20:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:03.125 20:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.125 20:13:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.125 20:13:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # 
set +x 00:17:03.125 20:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.125 20:13:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.125 20:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:03.125 "name": "raid_bdev1", 00:17:03.125 "uuid": "236e9481-0157-45cc-874b-bdd9199ed1f2", 00:17:03.125 "strip_size_kb": 0, 00:17:03.125 "state": "online", 00:17:03.125 "raid_level": "raid1", 00:17:03.125 "superblock": true, 00:17:03.125 "num_base_bdevs": 4, 00:17:03.125 "num_base_bdevs_discovered": 2, 00:17:03.125 "num_base_bdevs_operational": 2, 00:17:03.125 "base_bdevs_list": [ 00:17:03.125 { 00:17:03.125 "name": null, 00:17:03.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.125 "is_configured": false, 00:17:03.125 "data_offset": 0, 00:17:03.125 "data_size": 63488 00:17:03.125 }, 00:17:03.125 { 00:17:03.125 "name": null, 00:17:03.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.125 "is_configured": false, 00:17:03.125 "data_offset": 2048, 00:17:03.125 "data_size": 63488 00:17:03.125 }, 00:17:03.125 { 00:17:03.125 "name": "BaseBdev3", 00:17:03.126 "uuid": "c89bb8a6-230e-534a-a195-c25ebb492b71", 00:17:03.126 "is_configured": true, 00:17:03.126 "data_offset": 2048, 00:17:03.126 "data_size": 63488 00:17:03.126 }, 00:17:03.126 { 00:17:03.126 "name": "BaseBdev4", 00:17:03.126 "uuid": "36dd254d-2318-5ca0-bb8d-3ef3864ac2ba", 00:17:03.126 "is_configured": true, 00:17:03.126 "data_offset": 2048, 00:17:03.126 "data_size": 63488 00:17:03.126 } 00:17:03.126 ] 00:17:03.126 }' 00:17:03.126 20:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:03.126 20:13:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:03.693 20:13:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 
00:17:03.693 20:13:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.693 20:13:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:03.693 [2024-10-17 20:13:49.214745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:03.693 [2024-10-17 20:13:49.214981] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:17:03.693 [2024-10-17 20:13:49.215017] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:03.693 [2024-10-17 20:13:49.215090] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:03.693 [2024-10-17 20:13:49.229335] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:17:03.693 20:13:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.693 20:13:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:03.693 [2024-10-17 20:13:49.231999] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:04.627 20:13:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:04.627 20:13:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:04.627 20:13:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:04.627 20:13:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:04.627 20:13:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:04.627 20:13:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.627 20:13:50 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.627 20:13:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.627 20:13:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:04.627 20:13:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.886 20:13:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:04.886 "name": "raid_bdev1", 00:17:04.886 "uuid": "236e9481-0157-45cc-874b-bdd9199ed1f2", 00:17:04.886 "strip_size_kb": 0, 00:17:04.886 "state": "online", 00:17:04.886 "raid_level": "raid1", 00:17:04.886 "superblock": true, 00:17:04.886 "num_base_bdevs": 4, 00:17:04.886 "num_base_bdevs_discovered": 3, 00:17:04.886 "num_base_bdevs_operational": 3, 00:17:04.886 "process": { 00:17:04.886 "type": "rebuild", 00:17:04.886 "target": "spare", 00:17:04.886 "progress": { 00:17:04.886 "blocks": 20480, 00:17:04.886 "percent": 32 00:17:04.886 } 00:17:04.886 }, 00:17:04.886 "base_bdevs_list": [ 00:17:04.886 { 00:17:04.886 "name": "spare", 00:17:04.886 "uuid": "16092b7e-d1d4-542b-9745-65ae1ea6c0d5", 00:17:04.886 "is_configured": true, 00:17:04.886 "data_offset": 2048, 00:17:04.886 "data_size": 63488 00:17:04.886 }, 00:17:04.886 { 00:17:04.886 "name": null, 00:17:04.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:04.886 "is_configured": false, 00:17:04.886 "data_offset": 2048, 00:17:04.886 "data_size": 63488 00:17:04.886 }, 00:17:04.886 { 00:17:04.886 "name": "BaseBdev3", 00:17:04.886 "uuid": "c89bb8a6-230e-534a-a195-c25ebb492b71", 00:17:04.886 "is_configured": true, 00:17:04.886 "data_offset": 2048, 00:17:04.886 "data_size": 63488 00:17:04.886 }, 00:17:04.886 { 00:17:04.886 "name": "BaseBdev4", 00:17:04.886 "uuid": "36dd254d-2318-5ca0-bb8d-3ef3864ac2ba", 00:17:04.886 "is_configured": true, 00:17:04.886 "data_offset": 2048, 00:17:04.886 "data_size": 63488 00:17:04.886 } 00:17:04.886 
] 00:17:04.886 }' 00:17:04.886 20:13:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:04.887 20:13:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:04.887 20:13:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:04.887 20:13:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:04.887 20:13:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:04.887 20:13:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.887 20:13:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:04.887 [2024-10-17 20:13:50.405805] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:04.887 [2024-10-17 20:13:50.441723] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:04.887 [2024-10-17 20:13:50.441810] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:04.887 [2024-10-17 20:13:50.441842] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:04.887 [2024-10-17 20:13:50.441854] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:04.887 20:13:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.887 20:13:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:04.887 20:13:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:04.887 20:13:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:04.887 20:13:50 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:04.887 20:13:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:04.887 20:13:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:04.887 20:13:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:04.887 20:13:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:04.887 20:13:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:04.887 20:13:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:04.887 20:13:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.887 20:13:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.887 20:13:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.887 20:13:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:04.887 20:13:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.887 20:13:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:04.887 "name": "raid_bdev1", 00:17:04.887 "uuid": "236e9481-0157-45cc-874b-bdd9199ed1f2", 00:17:04.887 "strip_size_kb": 0, 00:17:04.887 "state": "online", 00:17:04.887 "raid_level": "raid1", 00:17:04.887 "superblock": true, 00:17:04.887 "num_base_bdevs": 4, 00:17:04.887 "num_base_bdevs_discovered": 2, 00:17:04.887 "num_base_bdevs_operational": 2, 00:17:04.887 "base_bdevs_list": [ 00:17:04.887 { 00:17:04.887 "name": null, 00:17:04.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:04.887 "is_configured": false, 00:17:04.887 "data_offset": 0, 00:17:04.887 "data_size": 63488 00:17:04.887 }, 00:17:04.887 { 
00:17:04.887 "name": null, 00:17:04.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:04.887 "is_configured": false, 00:17:04.887 "data_offset": 2048, 00:17:04.887 "data_size": 63488 00:17:04.887 }, 00:17:04.887 { 00:17:04.887 "name": "BaseBdev3", 00:17:04.887 "uuid": "c89bb8a6-230e-534a-a195-c25ebb492b71", 00:17:04.887 "is_configured": true, 00:17:04.887 "data_offset": 2048, 00:17:04.887 "data_size": 63488 00:17:04.887 }, 00:17:04.887 { 00:17:04.887 "name": "BaseBdev4", 00:17:04.887 "uuid": "36dd254d-2318-5ca0-bb8d-3ef3864ac2ba", 00:17:04.887 "is_configured": true, 00:17:04.887 "data_offset": 2048, 00:17:04.887 "data_size": 63488 00:17:04.887 } 00:17:04.887 ] 00:17:04.887 }' 00:17:04.887 20:13:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:04.887 20:13:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:05.453 20:13:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:05.453 20:13:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.453 20:13:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:05.453 [2024-10-17 20:13:50.984770] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:05.453 [2024-10-17 20:13:50.984864] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:05.453 [2024-10-17 20:13:50.984907] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:17:05.453 [2024-10-17 20:13:50.984924] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:05.453 [2024-10-17 20:13:50.985601] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:05.453 [2024-10-17 20:13:50.985634] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:05.453 [2024-10-17 
20:13:50.985730] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:05.453 [2024-10-17 20:13:50.985748] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:17:05.453 [2024-10-17 20:13:50.985765] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:05.453 [2024-10-17 20:13:50.985793] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:05.453 [2024-10-17 20:13:51.000164] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:17:05.453 spare 00:17:05.453 20:13:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.453 20:13:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:05.453 [2024-10-17 20:13:51.002733] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:06.393 20:13:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:06.393 20:13:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:06.393 20:13:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:06.393 20:13:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:06.393 20:13:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:06.393 20:13:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.393 20:13:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.393 20:13:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.393 20:13:52 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:06.393 20:13:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.651 20:13:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:06.651 "name": "raid_bdev1", 00:17:06.651 "uuid": "236e9481-0157-45cc-874b-bdd9199ed1f2", 00:17:06.651 "strip_size_kb": 0, 00:17:06.651 "state": "online", 00:17:06.651 "raid_level": "raid1", 00:17:06.651 "superblock": true, 00:17:06.651 "num_base_bdevs": 4, 00:17:06.651 "num_base_bdevs_discovered": 3, 00:17:06.651 "num_base_bdevs_operational": 3, 00:17:06.651 "process": { 00:17:06.651 "type": "rebuild", 00:17:06.651 "target": "spare", 00:17:06.651 "progress": { 00:17:06.651 "blocks": 20480, 00:17:06.651 "percent": 32 00:17:06.651 } 00:17:06.651 }, 00:17:06.651 "base_bdevs_list": [ 00:17:06.651 { 00:17:06.651 "name": "spare", 00:17:06.651 "uuid": "16092b7e-d1d4-542b-9745-65ae1ea6c0d5", 00:17:06.651 "is_configured": true, 00:17:06.651 "data_offset": 2048, 00:17:06.651 "data_size": 63488 00:17:06.651 }, 00:17:06.651 { 00:17:06.651 "name": null, 00:17:06.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.651 "is_configured": false, 00:17:06.651 "data_offset": 2048, 00:17:06.651 "data_size": 63488 00:17:06.651 }, 00:17:06.651 { 00:17:06.651 "name": "BaseBdev3", 00:17:06.651 "uuid": "c89bb8a6-230e-534a-a195-c25ebb492b71", 00:17:06.651 "is_configured": true, 00:17:06.651 "data_offset": 2048, 00:17:06.651 "data_size": 63488 00:17:06.651 }, 00:17:06.651 { 00:17:06.651 "name": "BaseBdev4", 00:17:06.651 "uuid": "36dd254d-2318-5ca0-bb8d-3ef3864ac2ba", 00:17:06.651 "is_configured": true, 00:17:06.651 "data_offset": 2048, 00:17:06.651 "data_size": 63488 00:17:06.651 } 00:17:06.651 ] 00:17:06.651 }' 00:17:06.651 20:13:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:06.651 20:13:52 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:06.651 20:13:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:06.651 20:13:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:06.651 20:13:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:06.651 20:13:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.651 20:13:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:06.651 [2024-10-17 20:13:52.164194] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:06.651 [2024-10-17 20:13:52.212076] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:06.651 [2024-10-17 20:13:52.212376] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:06.651 [2024-10-17 20:13:52.212406] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:06.651 [2024-10-17 20:13:52.212428] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:06.651 20:13:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.651 20:13:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:06.651 20:13:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:06.651 20:13:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:06.651 20:13:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:06.651 20:13:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:06.651 20:13:52 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:06.651 20:13:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.651 20:13:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.651 20:13:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.651 20:13:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.651 20:13:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.651 20:13:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.651 20:13:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.651 20:13:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:06.651 20:13:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.651 20:13:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.651 "name": "raid_bdev1", 00:17:06.651 "uuid": "236e9481-0157-45cc-874b-bdd9199ed1f2", 00:17:06.651 "strip_size_kb": 0, 00:17:06.651 "state": "online", 00:17:06.651 "raid_level": "raid1", 00:17:06.651 "superblock": true, 00:17:06.651 "num_base_bdevs": 4, 00:17:06.651 "num_base_bdevs_discovered": 2, 00:17:06.651 "num_base_bdevs_operational": 2, 00:17:06.651 "base_bdevs_list": [ 00:17:06.651 { 00:17:06.651 "name": null, 00:17:06.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.651 "is_configured": false, 00:17:06.651 "data_offset": 0, 00:17:06.651 "data_size": 63488 00:17:06.651 }, 00:17:06.651 { 00:17:06.651 "name": null, 00:17:06.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.651 "is_configured": false, 00:17:06.651 "data_offset": 2048, 00:17:06.651 "data_size": 63488 00:17:06.651 }, 
00:17:06.651 { 00:17:06.651 "name": "BaseBdev3", 00:17:06.651 "uuid": "c89bb8a6-230e-534a-a195-c25ebb492b71", 00:17:06.651 "is_configured": true, 00:17:06.651 "data_offset": 2048, 00:17:06.651 "data_size": 63488 00:17:06.651 }, 00:17:06.651 { 00:17:06.651 "name": "BaseBdev4", 00:17:06.651 "uuid": "36dd254d-2318-5ca0-bb8d-3ef3864ac2ba", 00:17:06.652 "is_configured": true, 00:17:06.652 "data_offset": 2048, 00:17:06.652 "data_size": 63488 00:17:06.652 } 00:17:06.652 ] 00:17:06.652 }' 00:17:06.652 20:13:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:06.652 20:13:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:07.218 20:13:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:07.218 20:13:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:07.218 20:13:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:07.218 20:13:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:07.218 20:13:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:07.218 20:13:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.218 20:13:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.218 20:13:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.218 20:13:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:07.218 20:13:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.218 20:13:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:07.218 "name": "raid_bdev1", 00:17:07.218 "uuid": 
"236e9481-0157-45cc-874b-bdd9199ed1f2", 00:17:07.218 "strip_size_kb": 0, 00:17:07.218 "state": "online", 00:17:07.218 "raid_level": "raid1", 00:17:07.218 "superblock": true, 00:17:07.218 "num_base_bdevs": 4, 00:17:07.218 "num_base_bdevs_discovered": 2, 00:17:07.218 "num_base_bdevs_operational": 2, 00:17:07.218 "base_bdevs_list": [ 00:17:07.218 { 00:17:07.218 "name": null, 00:17:07.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.218 "is_configured": false, 00:17:07.218 "data_offset": 0, 00:17:07.218 "data_size": 63488 00:17:07.218 }, 00:17:07.218 { 00:17:07.218 "name": null, 00:17:07.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.219 "is_configured": false, 00:17:07.219 "data_offset": 2048, 00:17:07.219 "data_size": 63488 00:17:07.219 }, 00:17:07.219 { 00:17:07.219 "name": "BaseBdev3", 00:17:07.219 "uuid": "c89bb8a6-230e-534a-a195-c25ebb492b71", 00:17:07.219 "is_configured": true, 00:17:07.219 "data_offset": 2048, 00:17:07.219 "data_size": 63488 00:17:07.219 }, 00:17:07.219 { 00:17:07.219 "name": "BaseBdev4", 00:17:07.219 "uuid": "36dd254d-2318-5ca0-bb8d-3ef3864ac2ba", 00:17:07.219 "is_configured": true, 00:17:07.219 "data_offset": 2048, 00:17:07.219 "data_size": 63488 00:17:07.219 } 00:17:07.219 ] 00:17:07.219 }' 00:17:07.219 20:13:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:07.219 20:13:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:07.219 20:13:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:07.477 20:13:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:07.477 20:13:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:07.477 20:13:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.477 20:13:52 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:07.477 20:13:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.477 20:13:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:07.477 20:13:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.477 20:13:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:07.477 [2024-10-17 20:13:52.914242] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:07.477 [2024-10-17 20:13:52.914316] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:07.477 [2024-10-17 20:13:52.914346] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:17:07.477 [2024-10-17 20:13:52.914363] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:07.477 [2024-10-17 20:13:52.914927] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:07.477 [2024-10-17 20:13:52.914975] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:07.477 [2024-10-17 20:13:52.915092] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:07.477 [2024-10-17 20:13:52.915122] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:17:07.477 [2024-10-17 20:13:52.915134] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:07.477 [2024-10-17 20:13:52.915150] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:07.477 BaseBdev1 00:17:07.477 20:13:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:17:07.477 20:13:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:08.413 20:13:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:08.413 20:13:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:08.413 20:13:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:08.413 20:13:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:08.413 20:13:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:08.413 20:13:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:08.413 20:13:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.413 20:13:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.413 20:13:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.413 20:13:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.413 20:13:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.413 20:13:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.413 20:13:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.413 20:13:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:08.413 20:13:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.413 20:13:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.413 "name": "raid_bdev1", 00:17:08.413 "uuid": 
"236e9481-0157-45cc-874b-bdd9199ed1f2", 00:17:08.413 "strip_size_kb": 0, 00:17:08.413 "state": "online", 00:17:08.413 "raid_level": "raid1", 00:17:08.413 "superblock": true, 00:17:08.413 "num_base_bdevs": 4, 00:17:08.413 "num_base_bdevs_discovered": 2, 00:17:08.413 "num_base_bdevs_operational": 2, 00:17:08.413 "base_bdevs_list": [ 00:17:08.413 { 00:17:08.413 "name": null, 00:17:08.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.413 "is_configured": false, 00:17:08.413 "data_offset": 0, 00:17:08.413 "data_size": 63488 00:17:08.413 }, 00:17:08.413 { 00:17:08.413 "name": null, 00:17:08.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.413 "is_configured": false, 00:17:08.413 "data_offset": 2048, 00:17:08.413 "data_size": 63488 00:17:08.413 }, 00:17:08.413 { 00:17:08.413 "name": "BaseBdev3", 00:17:08.413 "uuid": "c89bb8a6-230e-534a-a195-c25ebb492b71", 00:17:08.413 "is_configured": true, 00:17:08.413 "data_offset": 2048, 00:17:08.413 "data_size": 63488 00:17:08.413 }, 00:17:08.413 { 00:17:08.413 "name": "BaseBdev4", 00:17:08.413 "uuid": "36dd254d-2318-5ca0-bb8d-3ef3864ac2ba", 00:17:08.413 "is_configured": true, 00:17:08.413 "data_offset": 2048, 00:17:08.413 "data_size": 63488 00:17:08.413 } 00:17:08.413 ] 00:17:08.413 }' 00:17:08.413 20:13:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.413 20:13:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:08.980 20:13:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:08.980 20:13:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:08.980 20:13:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:08.980 20:13:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:08.980 20:13:54 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:08.980 20:13:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.980 20:13:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.980 20:13:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.980 20:13:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:08.980 20:13:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.980 20:13:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:08.981 "name": "raid_bdev1", 00:17:08.981 "uuid": "236e9481-0157-45cc-874b-bdd9199ed1f2", 00:17:08.981 "strip_size_kb": 0, 00:17:08.981 "state": "online", 00:17:08.981 "raid_level": "raid1", 00:17:08.981 "superblock": true, 00:17:08.981 "num_base_bdevs": 4, 00:17:08.981 "num_base_bdevs_discovered": 2, 00:17:08.981 "num_base_bdevs_operational": 2, 00:17:08.981 "base_bdevs_list": [ 00:17:08.981 { 00:17:08.981 "name": null, 00:17:08.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.981 "is_configured": false, 00:17:08.981 "data_offset": 0, 00:17:08.981 "data_size": 63488 00:17:08.981 }, 00:17:08.981 { 00:17:08.981 "name": null, 00:17:08.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.981 "is_configured": false, 00:17:08.981 "data_offset": 2048, 00:17:08.981 "data_size": 63488 00:17:08.981 }, 00:17:08.981 { 00:17:08.981 "name": "BaseBdev3", 00:17:08.981 "uuid": "c89bb8a6-230e-534a-a195-c25ebb492b71", 00:17:08.981 "is_configured": true, 00:17:08.981 "data_offset": 2048, 00:17:08.981 "data_size": 63488 00:17:08.981 }, 00:17:08.981 { 00:17:08.981 "name": "BaseBdev4", 00:17:08.981 "uuid": "36dd254d-2318-5ca0-bb8d-3ef3864ac2ba", 00:17:08.981 "is_configured": true, 00:17:08.981 "data_offset": 2048, 00:17:08.981 "data_size": 63488 00:17:08.981 
} 00:17:08.981 ] 00:17:08.981 }' 00:17:08.981 20:13:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:08.981 20:13:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:08.981 20:13:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:08.981 20:13:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:08.981 20:13:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:08.981 20:13:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:17:08.981 20:13:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:08.981 20:13:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:08.981 20:13:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:08.981 20:13:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:08.981 20:13:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:08.981 20:13:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:08.981 20:13:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.981 20:13:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:08.981 [2024-10-17 20:13:54.607319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:08.981 [2024-10-17 20:13:54.607581] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than 
existing raid bdev raid_bdev1 (6) 00:17:08.981 [2024-10-17 20:13:54.607603] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:08.981 request: 00:17:08.981 { 00:17:08.981 "base_bdev": "BaseBdev1", 00:17:08.981 "raid_bdev": "raid_bdev1", 00:17:08.981 "method": "bdev_raid_add_base_bdev", 00:17:08.981 "req_id": 1 00:17:08.981 } 00:17:08.981 Got JSON-RPC error response 00:17:08.981 response: 00:17:08.981 { 00:17:08.981 "code": -22, 00:17:08.981 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:08.981 } 00:17:08.981 20:13:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:08.981 20:13:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:17:08.981 20:13:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:08.981 20:13:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:08.981 20:13:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:08.981 20:13:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:10.355 20:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:10.355 20:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:10.355 20:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:10.355 20:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:10.355 20:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:10.355 20:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:10.355 20:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:17:10.355 20:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:10.355 20:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:10.355 20:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:10.355 20:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.355 20:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.355 20:13:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.355 20:13:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:10.355 20:13:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.355 20:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:10.355 "name": "raid_bdev1", 00:17:10.355 "uuid": "236e9481-0157-45cc-874b-bdd9199ed1f2", 00:17:10.355 "strip_size_kb": 0, 00:17:10.355 "state": "online", 00:17:10.355 "raid_level": "raid1", 00:17:10.355 "superblock": true, 00:17:10.355 "num_base_bdevs": 4, 00:17:10.355 "num_base_bdevs_discovered": 2, 00:17:10.355 "num_base_bdevs_operational": 2, 00:17:10.355 "base_bdevs_list": [ 00:17:10.355 { 00:17:10.355 "name": null, 00:17:10.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.355 "is_configured": false, 00:17:10.355 "data_offset": 0, 00:17:10.355 "data_size": 63488 00:17:10.355 }, 00:17:10.355 { 00:17:10.355 "name": null, 00:17:10.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.355 "is_configured": false, 00:17:10.355 "data_offset": 2048, 00:17:10.355 "data_size": 63488 00:17:10.355 }, 00:17:10.355 { 00:17:10.355 "name": "BaseBdev3", 00:17:10.355 "uuid": "c89bb8a6-230e-534a-a195-c25ebb492b71", 00:17:10.355 "is_configured": true, 00:17:10.355 
"data_offset": 2048, 00:17:10.355 "data_size": 63488 00:17:10.355 }, 00:17:10.355 { 00:17:10.355 "name": "BaseBdev4", 00:17:10.355 "uuid": "36dd254d-2318-5ca0-bb8d-3ef3864ac2ba", 00:17:10.355 "is_configured": true, 00:17:10.355 "data_offset": 2048, 00:17:10.355 "data_size": 63488 00:17:10.355 } 00:17:10.355 ] 00:17:10.355 }' 00:17:10.356 20:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:10.356 20:13:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:10.614 20:13:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:10.614 20:13:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:10.614 20:13:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:10.614 20:13:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:10.614 20:13:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:10.614 20:13:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.614 20:13:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.614 20:13:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.614 20:13:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:10.614 20:13:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.614 20:13:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:10.614 "name": "raid_bdev1", 00:17:10.614 "uuid": "236e9481-0157-45cc-874b-bdd9199ed1f2", 00:17:10.614 "strip_size_kb": 0, 00:17:10.614 "state": "online", 00:17:10.614 "raid_level": "raid1", 00:17:10.614 "superblock": true, 
00:17:10.614 "num_base_bdevs": 4, 00:17:10.614 "num_base_bdevs_discovered": 2, 00:17:10.614 "num_base_bdevs_operational": 2, 00:17:10.614 "base_bdevs_list": [ 00:17:10.614 { 00:17:10.614 "name": null, 00:17:10.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.614 "is_configured": false, 00:17:10.614 "data_offset": 0, 00:17:10.614 "data_size": 63488 00:17:10.614 }, 00:17:10.614 { 00:17:10.614 "name": null, 00:17:10.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.614 "is_configured": false, 00:17:10.614 "data_offset": 2048, 00:17:10.614 "data_size": 63488 00:17:10.614 }, 00:17:10.614 { 00:17:10.614 "name": "BaseBdev3", 00:17:10.614 "uuid": "c89bb8a6-230e-534a-a195-c25ebb492b71", 00:17:10.614 "is_configured": true, 00:17:10.614 "data_offset": 2048, 00:17:10.614 "data_size": 63488 00:17:10.614 }, 00:17:10.614 { 00:17:10.614 "name": "BaseBdev4", 00:17:10.614 "uuid": "36dd254d-2318-5ca0-bb8d-3ef3864ac2ba", 00:17:10.614 "is_configured": true, 00:17:10.614 "data_offset": 2048, 00:17:10.614 "data_size": 63488 00:17:10.614 } 00:17:10.614 ] 00:17:10.614 }' 00:17:10.614 20:13:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:10.614 20:13:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:10.614 20:13:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:10.872 20:13:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:10.873 20:13:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79383 00:17:10.873 20:13:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 79383 ']' 00:17:10.873 20:13:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 79383 00:17:10.873 20:13:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:17:10.873 20:13:56 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:10.873 20:13:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79383 00:17:10.873 killing process with pid 79383 00:17:10.873 Received shutdown signal, test time was about 19.588410 seconds 00:17:10.873 00:17:10.873 Latency(us) 00:17:10.873 [2024-10-17T20:13:56.527Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:10.873 [2024-10-17T20:13:56.527Z] =================================================================================================================== 00:17:10.873 [2024-10-17T20:13:56.527Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:10.873 20:13:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:10.873 20:13:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:10.873 20:13:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79383' 00:17:10.873 20:13:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 79383 00:17:10.873 [2024-10-17 20:13:56.322531] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:10.873 20:13:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 79383 00:17:10.873 [2024-10-17 20:13:56.322674] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:10.873 [2024-10-17 20:13:56.322763] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:10.873 [2024-10-17 20:13:56.322778] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:11.131 [2024-10-17 20:13:56.652685] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:12.066 20:13:57 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@786 -- # return 0 00:17:12.066 ************************************ 00:17:12.067 END TEST raid_rebuild_test_sb_io 00:17:12.067 ************************************ 00:17:12.067 00:17:12.067 real 0m23.130s 00:17:12.067 user 0m31.647s 00:17:12.067 sys 0m2.374s 00:17:12.067 20:13:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:12.067 20:13:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:12.067 20:13:57 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:17:12.067 20:13:57 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:17:12.067 20:13:57 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:12.067 20:13:57 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:12.067 20:13:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:12.067 ************************************ 00:17:12.067 START TEST raid5f_state_function_test 00:17:12.067 ************************************ 00:17:12.067 20:13:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 false 00:17:12.067 20:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:17:12.067 20:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:17:12.067 20:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:17:12.067 20:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:12.067 20:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:12.067 20:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:12.067 20:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo 
BaseBdev1 00:17:12.067 20:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:12.067 20:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:12.067 20:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:12.067 20:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:12.067 20:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:12.067 20:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:17:12.067 20:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:12.067 20:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:12.067 20:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:12.067 20:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:12.067 20:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:12.067 20:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:12.067 20:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:12.067 20:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:12.067 20:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:17:12.067 20:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:17:12.067 20:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:17:12.067 20:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' 
false = true ']' 00:17:12.067 20:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:17:12.067 20:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80117 00:17:12.067 20:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:12.067 20:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80117' 00:17:12.067 Process raid pid: 80117 00:17:12.067 20:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80117 00:17:12.067 20:13:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 80117 ']' 00:17:12.067 20:13:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:12.067 20:13:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:12.067 20:13:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:12.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:12.067 20:13:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:12.067 20:13:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.325 [2024-10-17 20:13:57.803528] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 
00:17:12.325 [2024-10-17 20:13:57.804053] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:12.583 [2024-10-17 20:13:57.980112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:12.583 [2024-10-17 20:13:58.104865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:12.842 [2024-10-17 20:13:58.298683] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:12.842 [2024-10-17 20:13:58.298744] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:13.409 20:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:13.409 20:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:17:13.409 20:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:13.409 20:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.409 20:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.409 [2024-10-17 20:13:58.760336] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:13.409 [2024-10-17 20:13:58.760420] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:13.409 [2024-10-17 20:13:58.760452] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:13.409 [2024-10-17 20:13:58.760510] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:13.409 [2024-10-17 20:13:58.760519] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:17:13.409 [2024-10-17 20:13:58.760546] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:13.409 20:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.409 20:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:13.409 20:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:13.409 20:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:13.409 20:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:13.409 20:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:13.409 20:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:13.409 20:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.409 20:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.409 20:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.409 20:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.409 20:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.409 20:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.409 20:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:13.409 20:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.409 20:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:17:13.409 20:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.409 "name": "Existed_Raid", 00:17:13.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.409 "strip_size_kb": 64, 00:17:13.409 "state": "configuring", 00:17:13.409 "raid_level": "raid5f", 00:17:13.409 "superblock": false, 00:17:13.409 "num_base_bdevs": 3, 00:17:13.409 "num_base_bdevs_discovered": 0, 00:17:13.409 "num_base_bdevs_operational": 3, 00:17:13.409 "base_bdevs_list": [ 00:17:13.409 { 00:17:13.409 "name": "BaseBdev1", 00:17:13.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.409 "is_configured": false, 00:17:13.409 "data_offset": 0, 00:17:13.409 "data_size": 0 00:17:13.409 }, 00:17:13.409 { 00:17:13.409 "name": "BaseBdev2", 00:17:13.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.409 "is_configured": false, 00:17:13.409 "data_offset": 0, 00:17:13.409 "data_size": 0 00:17:13.409 }, 00:17:13.409 { 00:17:13.409 "name": "BaseBdev3", 00:17:13.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.409 "is_configured": false, 00:17:13.409 "data_offset": 0, 00:17:13.409 "data_size": 0 00:17:13.409 } 00:17:13.409 ] 00:17:13.409 }' 00:17:13.409 20:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.409 20:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.668 20:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:13.668 20:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.668 20:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.668 [2024-10-17 20:13:59.256421] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:13.668 [2024-10-17 20:13:59.256480] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:17:13.668 20:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.668 20:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:13.668 20:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.668 20:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.668 [2024-10-17 20:13:59.268430] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:13.668 [2024-10-17 20:13:59.268497] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:13.668 [2024-10-17 20:13:59.268513] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:13.668 [2024-10-17 20:13:59.268529] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:13.668 [2024-10-17 20:13:59.268539] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:13.668 [2024-10-17 20:13:59.268553] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:13.668 20:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.668 20:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:13.668 20:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.668 20:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.668 [2024-10-17 20:13:59.317622] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:13.668 BaseBdev1 00:17:13.668 20:13:59 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.928 20:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:13.928 20:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:13.928 20:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:13.928 20:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:13.928 20:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:13.928 20:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:13.928 20:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:13.928 20:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.928 20:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.928 20:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.928 20:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:13.928 20:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.928 20:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.928 [ 00:17:13.928 { 00:17:13.928 "name": "BaseBdev1", 00:17:13.928 "aliases": [ 00:17:13.928 "4e2bad98-6cb6-4ebf-a54e-d6a19f218ba6" 00:17:13.928 ], 00:17:13.928 "product_name": "Malloc disk", 00:17:13.928 "block_size": 512, 00:17:13.928 "num_blocks": 65536, 00:17:13.928 "uuid": "4e2bad98-6cb6-4ebf-a54e-d6a19f218ba6", 00:17:13.928 "assigned_rate_limits": { 00:17:13.928 "rw_ios_per_sec": 0, 00:17:13.928 
"rw_mbytes_per_sec": 0, 00:17:13.928 "r_mbytes_per_sec": 0, 00:17:13.928 "w_mbytes_per_sec": 0 00:17:13.928 }, 00:17:13.928 "claimed": true, 00:17:13.928 "claim_type": "exclusive_write", 00:17:13.928 "zoned": false, 00:17:13.928 "supported_io_types": { 00:17:13.928 "read": true, 00:17:13.928 "write": true, 00:17:13.928 "unmap": true, 00:17:13.928 "flush": true, 00:17:13.928 "reset": true, 00:17:13.928 "nvme_admin": false, 00:17:13.928 "nvme_io": false, 00:17:13.928 "nvme_io_md": false, 00:17:13.928 "write_zeroes": true, 00:17:13.928 "zcopy": true, 00:17:13.928 "get_zone_info": false, 00:17:13.928 "zone_management": false, 00:17:13.928 "zone_append": false, 00:17:13.928 "compare": false, 00:17:13.928 "compare_and_write": false, 00:17:13.928 "abort": true, 00:17:13.928 "seek_hole": false, 00:17:13.928 "seek_data": false, 00:17:13.928 "copy": true, 00:17:13.928 "nvme_iov_md": false 00:17:13.928 }, 00:17:13.928 "memory_domains": [ 00:17:13.928 { 00:17:13.928 "dma_device_id": "system", 00:17:13.928 "dma_device_type": 1 00:17:13.928 }, 00:17:13.928 { 00:17:13.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:13.928 "dma_device_type": 2 00:17:13.928 } 00:17:13.928 ], 00:17:13.928 "driver_specific": {} 00:17:13.928 } 00:17:13.928 ] 00:17:13.928 20:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.928 20:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:13.928 20:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:13.928 20:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:13.928 20:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:13.928 20:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:13.928 20:13:59 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:13.928 20:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:13.928 20:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.928 20:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.928 20:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.928 20:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.928 20:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.928 20:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.928 20:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.928 20:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:13.928 20:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.928 20:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.928 "name": "Existed_Raid", 00:17:13.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.928 "strip_size_kb": 64, 00:17:13.928 "state": "configuring", 00:17:13.928 "raid_level": "raid5f", 00:17:13.928 "superblock": false, 00:17:13.928 "num_base_bdevs": 3, 00:17:13.928 "num_base_bdevs_discovered": 1, 00:17:13.928 "num_base_bdevs_operational": 3, 00:17:13.928 "base_bdevs_list": [ 00:17:13.928 { 00:17:13.928 "name": "BaseBdev1", 00:17:13.928 "uuid": "4e2bad98-6cb6-4ebf-a54e-d6a19f218ba6", 00:17:13.928 "is_configured": true, 00:17:13.928 "data_offset": 0, 00:17:13.928 "data_size": 65536 00:17:13.928 }, 00:17:13.928 { 00:17:13.928 "name": 
"BaseBdev2", 00:17:13.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.928 "is_configured": false, 00:17:13.928 "data_offset": 0, 00:17:13.928 "data_size": 0 00:17:13.928 }, 00:17:13.928 { 00:17:13.928 "name": "BaseBdev3", 00:17:13.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.928 "is_configured": false, 00:17:13.928 "data_offset": 0, 00:17:13.928 "data_size": 0 00:17:13.928 } 00:17:13.928 ] 00:17:13.928 }' 00:17:13.928 20:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.928 20:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.495 20:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:14.495 20:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.495 20:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.495 [2024-10-17 20:13:59.877883] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:14.495 [2024-10-17 20:13:59.877943] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:14.495 20:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.495 20:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:14.495 20:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.495 20:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.495 [2024-10-17 20:13:59.885916] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:14.495 [2024-10-17 20:13:59.888479] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:17:14.495 [2024-10-17 20:13:59.888534] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:14.495 [2024-10-17 20:13:59.888550] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:14.495 [2024-10-17 20:13:59.888594] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:14.495 20:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.495 20:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:14.495 20:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:14.495 20:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:14.495 20:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:14.495 20:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:14.495 20:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:14.495 20:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:14.495 20:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:14.495 20:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:14.495 20:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:14.495 20:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:14.495 20:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:14.495 20:13:59 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.495 20:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.495 20:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:14.495 20:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.495 20:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.495 20:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:14.495 "name": "Existed_Raid", 00:17:14.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.495 "strip_size_kb": 64, 00:17:14.495 "state": "configuring", 00:17:14.495 "raid_level": "raid5f", 00:17:14.495 "superblock": false, 00:17:14.495 "num_base_bdevs": 3, 00:17:14.495 "num_base_bdevs_discovered": 1, 00:17:14.495 "num_base_bdevs_operational": 3, 00:17:14.495 "base_bdevs_list": [ 00:17:14.495 { 00:17:14.495 "name": "BaseBdev1", 00:17:14.495 "uuid": "4e2bad98-6cb6-4ebf-a54e-d6a19f218ba6", 00:17:14.495 "is_configured": true, 00:17:14.495 "data_offset": 0, 00:17:14.495 "data_size": 65536 00:17:14.495 }, 00:17:14.495 { 00:17:14.495 "name": "BaseBdev2", 00:17:14.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.495 "is_configured": false, 00:17:14.495 "data_offset": 0, 00:17:14.495 "data_size": 0 00:17:14.495 }, 00:17:14.495 { 00:17:14.495 "name": "BaseBdev3", 00:17:14.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.495 "is_configured": false, 00:17:14.495 "data_offset": 0, 00:17:14.495 "data_size": 0 00:17:14.495 } 00:17:14.495 ] 00:17:14.495 }' 00:17:14.495 20:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:14.495 20:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.063 20:14:00 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:15.063 20:14:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.063 20:14:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.063 [2024-10-17 20:14:00.454046] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:15.063 BaseBdev2 00:17:15.063 20:14:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.063 20:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:15.063 20:14:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:15.063 20:14:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:15.063 20:14:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:15.063 20:14:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:15.063 20:14:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:15.063 20:14:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:15.063 20:14:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.063 20:14:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.063 20:14:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.063 20:14:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:15.063 20:14:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.063 20:14:00 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:15.063 [ 00:17:15.063 { 00:17:15.063 "name": "BaseBdev2", 00:17:15.063 "aliases": [ 00:17:15.063 "7b4292ab-67c3-4039-a70b-3d729323d781" 00:17:15.063 ], 00:17:15.063 "product_name": "Malloc disk", 00:17:15.063 "block_size": 512, 00:17:15.063 "num_blocks": 65536, 00:17:15.063 "uuid": "7b4292ab-67c3-4039-a70b-3d729323d781", 00:17:15.063 "assigned_rate_limits": { 00:17:15.063 "rw_ios_per_sec": 0, 00:17:15.063 "rw_mbytes_per_sec": 0, 00:17:15.063 "r_mbytes_per_sec": 0, 00:17:15.063 "w_mbytes_per_sec": 0 00:17:15.063 }, 00:17:15.063 "claimed": true, 00:17:15.063 "claim_type": "exclusive_write", 00:17:15.063 "zoned": false, 00:17:15.063 "supported_io_types": { 00:17:15.063 "read": true, 00:17:15.063 "write": true, 00:17:15.063 "unmap": true, 00:17:15.063 "flush": true, 00:17:15.063 "reset": true, 00:17:15.063 "nvme_admin": false, 00:17:15.063 "nvme_io": false, 00:17:15.063 "nvme_io_md": false, 00:17:15.063 "write_zeroes": true, 00:17:15.063 "zcopy": true, 00:17:15.063 "get_zone_info": false, 00:17:15.063 "zone_management": false, 00:17:15.063 "zone_append": false, 00:17:15.063 "compare": false, 00:17:15.063 "compare_and_write": false, 00:17:15.063 "abort": true, 00:17:15.063 "seek_hole": false, 00:17:15.063 "seek_data": false, 00:17:15.063 "copy": true, 00:17:15.063 "nvme_iov_md": false 00:17:15.063 }, 00:17:15.063 "memory_domains": [ 00:17:15.063 { 00:17:15.063 "dma_device_id": "system", 00:17:15.063 "dma_device_type": 1 00:17:15.063 }, 00:17:15.063 { 00:17:15.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:15.063 "dma_device_type": 2 00:17:15.063 } 00:17:15.063 ], 00:17:15.063 "driver_specific": {} 00:17:15.063 } 00:17:15.063 ] 00:17:15.063 20:14:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.063 20:14:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:15.063 20:14:00 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:15.063 20:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:15.063 20:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:15.063 20:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:15.063 20:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:15.063 20:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:15.063 20:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:15.063 20:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:15.063 20:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:15.063 20:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:15.063 20:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:15.063 20:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:15.063 20:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.063 20:14:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.063 20:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:15.063 20:14:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.063 20:14:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.063 20:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:17:15.063 "name": "Existed_Raid", 00:17:15.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.063 "strip_size_kb": 64, 00:17:15.063 "state": "configuring", 00:17:15.063 "raid_level": "raid5f", 00:17:15.063 "superblock": false, 00:17:15.063 "num_base_bdevs": 3, 00:17:15.063 "num_base_bdevs_discovered": 2, 00:17:15.063 "num_base_bdevs_operational": 3, 00:17:15.063 "base_bdevs_list": [ 00:17:15.063 { 00:17:15.063 "name": "BaseBdev1", 00:17:15.063 "uuid": "4e2bad98-6cb6-4ebf-a54e-d6a19f218ba6", 00:17:15.063 "is_configured": true, 00:17:15.063 "data_offset": 0, 00:17:15.063 "data_size": 65536 00:17:15.063 }, 00:17:15.063 { 00:17:15.063 "name": "BaseBdev2", 00:17:15.063 "uuid": "7b4292ab-67c3-4039-a70b-3d729323d781", 00:17:15.063 "is_configured": true, 00:17:15.063 "data_offset": 0, 00:17:15.063 "data_size": 65536 00:17:15.063 }, 00:17:15.063 { 00:17:15.064 "name": "BaseBdev3", 00:17:15.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.064 "is_configured": false, 00:17:15.064 "data_offset": 0, 00:17:15.064 "data_size": 0 00:17:15.064 } 00:17:15.064 ] 00:17:15.064 }' 00:17:15.064 20:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:15.064 20:14:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.630 20:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:15.630 20:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.631 20:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.631 [2024-10-17 20:14:01.088751] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:15.631 [2024-10-17 20:14:01.089026] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:15.631 [2024-10-17 20:14:01.089059] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:17:15.631 [2024-10-17 20:14:01.089403] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:15.631 [2024-10-17 20:14:01.095071] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:15.631 [2024-10-17 20:14:01.095111] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:15.631 [2024-10-17 20:14:01.095498] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:15.631 BaseBdev3 00:17:15.631 20:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.631 20:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:15.631 20:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:17:15.631 20:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:15.631 20:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:15.631 20:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:15.631 20:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:15.631 20:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:15.631 20:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.631 20:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.631 20:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.631 20:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:17:15.631 20:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.631 20:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.631 [ 00:17:15.631 { 00:17:15.631 "name": "BaseBdev3", 00:17:15.631 "aliases": [ 00:17:15.631 "dcdf90df-080d-40b1-a491-487dc80e4de6" 00:17:15.631 ], 00:17:15.631 "product_name": "Malloc disk", 00:17:15.631 "block_size": 512, 00:17:15.631 "num_blocks": 65536, 00:17:15.631 "uuid": "dcdf90df-080d-40b1-a491-487dc80e4de6", 00:17:15.631 "assigned_rate_limits": { 00:17:15.631 "rw_ios_per_sec": 0, 00:17:15.631 "rw_mbytes_per_sec": 0, 00:17:15.631 "r_mbytes_per_sec": 0, 00:17:15.631 "w_mbytes_per_sec": 0 00:17:15.631 }, 00:17:15.631 "claimed": true, 00:17:15.631 "claim_type": "exclusive_write", 00:17:15.631 "zoned": false, 00:17:15.631 "supported_io_types": { 00:17:15.631 "read": true, 00:17:15.631 "write": true, 00:17:15.631 "unmap": true, 00:17:15.631 "flush": true, 00:17:15.631 "reset": true, 00:17:15.631 "nvme_admin": false, 00:17:15.631 "nvme_io": false, 00:17:15.631 "nvme_io_md": false, 00:17:15.631 "write_zeroes": true, 00:17:15.631 "zcopy": true, 00:17:15.631 "get_zone_info": false, 00:17:15.631 "zone_management": false, 00:17:15.631 "zone_append": false, 00:17:15.631 "compare": false, 00:17:15.631 "compare_and_write": false, 00:17:15.631 "abort": true, 00:17:15.631 "seek_hole": false, 00:17:15.631 "seek_data": false, 00:17:15.631 "copy": true, 00:17:15.631 "nvme_iov_md": false 00:17:15.631 }, 00:17:15.631 "memory_domains": [ 00:17:15.631 { 00:17:15.631 "dma_device_id": "system", 00:17:15.631 "dma_device_type": 1 00:17:15.631 }, 00:17:15.631 { 00:17:15.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:15.631 "dma_device_type": 2 00:17:15.631 } 00:17:15.631 ], 00:17:15.631 "driver_specific": {} 00:17:15.631 } 00:17:15.631 ] 00:17:15.631 20:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:17:15.631 20:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:15.631 20:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:15.631 20:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:15.631 20:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:17:15.631 20:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:15.631 20:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:15.631 20:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:15.631 20:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:15.631 20:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:15.631 20:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:15.631 20:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:15.631 20:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:15.631 20:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:15.631 20:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.631 20:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:15.631 20:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.631 20:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.631 20:14:01 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.631 20:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:15.631 "name": "Existed_Raid", 00:17:15.631 "uuid": "359164de-5e41-4007-b0dc-04fc3de60b47", 00:17:15.631 "strip_size_kb": 64, 00:17:15.631 "state": "online", 00:17:15.631 "raid_level": "raid5f", 00:17:15.631 "superblock": false, 00:17:15.631 "num_base_bdevs": 3, 00:17:15.631 "num_base_bdevs_discovered": 3, 00:17:15.631 "num_base_bdevs_operational": 3, 00:17:15.631 "base_bdevs_list": [ 00:17:15.631 { 00:17:15.631 "name": "BaseBdev1", 00:17:15.631 "uuid": "4e2bad98-6cb6-4ebf-a54e-d6a19f218ba6", 00:17:15.631 "is_configured": true, 00:17:15.631 "data_offset": 0, 00:17:15.631 "data_size": 65536 00:17:15.631 }, 00:17:15.631 { 00:17:15.631 "name": "BaseBdev2", 00:17:15.631 "uuid": "7b4292ab-67c3-4039-a70b-3d729323d781", 00:17:15.631 "is_configured": true, 00:17:15.631 "data_offset": 0, 00:17:15.631 "data_size": 65536 00:17:15.631 }, 00:17:15.631 { 00:17:15.631 "name": "BaseBdev3", 00:17:15.631 "uuid": "dcdf90df-080d-40b1-a491-487dc80e4de6", 00:17:15.631 "is_configured": true, 00:17:15.631 "data_offset": 0, 00:17:15.631 "data_size": 65536 00:17:15.631 } 00:17:15.631 ] 00:17:15.631 }' 00:17:15.631 20:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:15.631 20:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.198 20:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:16.198 20:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:16.198 20:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:16.198 20:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:16.198 20:14:01 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:16.198 20:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:16.198 20:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:16.198 20:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.198 20:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.198 20:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:16.198 [2024-10-17 20:14:01.685620] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:16.198 20:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.198 20:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:16.198 "name": "Existed_Raid", 00:17:16.198 "aliases": [ 00:17:16.198 "359164de-5e41-4007-b0dc-04fc3de60b47" 00:17:16.198 ], 00:17:16.198 "product_name": "Raid Volume", 00:17:16.198 "block_size": 512, 00:17:16.198 "num_blocks": 131072, 00:17:16.198 "uuid": "359164de-5e41-4007-b0dc-04fc3de60b47", 00:17:16.198 "assigned_rate_limits": { 00:17:16.198 "rw_ios_per_sec": 0, 00:17:16.198 "rw_mbytes_per_sec": 0, 00:17:16.198 "r_mbytes_per_sec": 0, 00:17:16.198 "w_mbytes_per_sec": 0 00:17:16.198 }, 00:17:16.198 "claimed": false, 00:17:16.198 "zoned": false, 00:17:16.198 "supported_io_types": { 00:17:16.198 "read": true, 00:17:16.198 "write": true, 00:17:16.198 "unmap": false, 00:17:16.198 "flush": false, 00:17:16.198 "reset": true, 00:17:16.198 "nvme_admin": false, 00:17:16.198 "nvme_io": false, 00:17:16.198 "nvme_io_md": false, 00:17:16.198 "write_zeroes": true, 00:17:16.198 "zcopy": false, 00:17:16.198 "get_zone_info": false, 00:17:16.198 "zone_management": false, 00:17:16.198 "zone_append": false, 
00:17:16.198 "compare": false, 00:17:16.198 "compare_and_write": false, 00:17:16.198 "abort": false, 00:17:16.198 "seek_hole": false, 00:17:16.198 "seek_data": false, 00:17:16.198 "copy": false, 00:17:16.198 "nvme_iov_md": false 00:17:16.198 }, 00:17:16.198 "driver_specific": { 00:17:16.198 "raid": { 00:17:16.198 "uuid": "359164de-5e41-4007-b0dc-04fc3de60b47", 00:17:16.198 "strip_size_kb": 64, 00:17:16.198 "state": "online", 00:17:16.198 "raid_level": "raid5f", 00:17:16.198 "superblock": false, 00:17:16.198 "num_base_bdevs": 3, 00:17:16.198 "num_base_bdevs_discovered": 3, 00:17:16.198 "num_base_bdevs_operational": 3, 00:17:16.198 "base_bdevs_list": [ 00:17:16.198 { 00:17:16.198 "name": "BaseBdev1", 00:17:16.198 "uuid": "4e2bad98-6cb6-4ebf-a54e-d6a19f218ba6", 00:17:16.198 "is_configured": true, 00:17:16.198 "data_offset": 0, 00:17:16.198 "data_size": 65536 00:17:16.198 }, 00:17:16.198 { 00:17:16.198 "name": "BaseBdev2", 00:17:16.198 "uuid": "7b4292ab-67c3-4039-a70b-3d729323d781", 00:17:16.198 "is_configured": true, 00:17:16.198 "data_offset": 0, 00:17:16.198 "data_size": 65536 00:17:16.198 }, 00:17:16.198 { 00:17:16.198 "name": "BaseBdev3", 00:17:16.198 "uuid": "dcdf90df-080d-40b1-a491-487dc80e4de6", 00:17:16.198 "is_configured": true, 00:17:16.198 "data_offset": 0, 00:17:16.198 "data_size": 65536 00:17:16.198 } 00:17:16.198 ] 00:17:16.198 } 00:17:16.198 } 00:17:16.198 }' 00:17:16.198 20:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:16.198 20:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:16.198 BaseBdev2 00:17:16.198 BaseBdev3' 00:17:16.198 20:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:16.198 20:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:17:16.198 20:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:16.198 20:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:16.198 20:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.198 20:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.198 20:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:16.457 20:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.457 20:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:16.457 20:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:16.457 20:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:16.457 20:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:16.457 20:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:16.457 20:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.457 20:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.457 20:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.457 20:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:16.457 20:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:16.457 20:14:01 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:16.457 20:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:16.457 20:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.457 20:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.457 20:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:16.457 20:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.457 20:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:16.457 20:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:16.457 20:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:16.457 20:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.457 20:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.457 [2024-10-17 20:14:02.029545] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:16.715 20:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.715 20:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:16.715 20:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:17:16.715 20:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:16.715 20:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:16.715 20:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:16.715 
20:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:17:16.715 20:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:16.715 20:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:16.715 20:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:16.715 20:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:16.715 20:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:16.715 20:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:16.715 20:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:16.715 20:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:16.715 20:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:16.715 20:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.715 20:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:16.715 20:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.715 20:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.715 20:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.715 20:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:16.715 "name": "Existed_Raid", 00:17:16.715 "uuid": "359164de-5e41-4007-b0dc-04fc3de60b47", 00:17:16.715 "strip_size_kb": 64, 00:17:16.715 "state": 
"online", 00:17:16.715 "raid_level": "raid5f", 00:17:16.715 "superblock": false, 00:17:16.715 "num_base_bdevs": 3, 00:17:16.715 "num_base_bdevs_discovered": 2, 00:17:16.715 "num_base_bdevs_operational": 2, 00:17:16.715 "base_bdevs_list": [ 00:17:16.715 { 00:17:16.715 "name": null, 00:17:16.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:16.715 "is_configured": false, 00:17:16.715 "data_offset": 0, 00:17:16.715 "data_size": 65536 00:17:16.715 }, 00:17:16.715 { 00:17:16.715 "name": "BaseBdev2", 00:17:16.715 "uuid": "7b4292ab-67c3-4039-a70b-3d729323d781", 00:17:16.715 "is_configured": true, 00:17:16.716 "data_offset": 0, 00:17:16.716 "data_size": 65536 00:17:16.716 }, 00:17:16.716 { 00:17:16.716 "name": "BaseBdev3", 00:17:16.716 "uuid": "dcdf90df-080d-40b1-a491-487dc80e4de6", 00:17:16.716 "is_configured": true, 00:17:16.716 "data_offset": 0, 00:17:16.716 "data_size": 65536 00:17:16.716 } 00:17:16.716 ] 00:17:16.716 }' 00:17:16.716 20:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:16.716 20:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.287 20:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:17.287 20:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:17.287 20:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.287 20:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.287 20:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.287 20:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:17.287 20:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.287 20:14:02 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:17.287 20:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:17.287 20:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:17.287 20:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.287 20:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.287 [2024-10-17 20:14:02.698722] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:17.287 [2024-10-17 20:14:02.698838] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:17.287 [2024-10-17 20:14:02.776783] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:17.287 20:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.287 20:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:17.287 20:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:17.287 20:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.288 20:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:17.288 20:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.288 20:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.288 20:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.288 20:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:17.288 20:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:17:17.288 20:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:17:17.288 20:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.288 20:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.288 [2024-10-17 20:14:02.836851] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:17.288 [2024-10-17 20:14:02.836913] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:17.288 20:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.288 20:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:17.288 20:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:17.288 20:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.288 20:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.288 20:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.288 20:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:17.288 20:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.546 20:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:17.546 20:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:17.546 20:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:17:17.546 20:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:17.546 20:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:17:17.546 20:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:17.546 20:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.546 20:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.546 BaseBdev2 00:17:17.546 20:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.547 20:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:17:17.547 20:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:17.547 20:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:17.547 20:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:17.547 20:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:17.547 20:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:17.547 20:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:17.547 20:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.547 20:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.547 20:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.547 20:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:17.547 20:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.547 20:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:17:17.547 [ 00:17:17.547 { 00:17:17.547 "name": "BaseBdev2", 00:17:17.547 "aliases": [ 00:17:17.547 "47542be0-e059-4759-a056-b747435dba69" 00:17:17.547 ], 00:17:17.547 "product_name": "Malloc disk", 00:17:17.547 "block_size": 512, 00:17:17.547 "num_blocks": 65536, 00:17:17.547 "uuid": "47542be0-e059-4759-a056-b747435dba69", 00:17:17.547 "assigned_rate_limits": { 00:17:17.547 "rw_ios_per_sec": 0, 00:17:17.547 "rw_mbytes_per_sec": 0, 00:17:17.547 "r_mbytes_per_sec": 0, 00:17:17.547 "w_mbytes_per_sec": 0 00:17:17.547 }, 00:17:17.547 "claimed": false, 00:17:17.547 "zoned": false, 00:17:17.547 "supported_io_types": { 00:17:17.547 "read": true, 00:17:17.547 "write": true, 00:17:17.547 "unmap": true, 00:17:17.547 "flush": true, 00:17:17.547 "reset": true, 00:17:17.547 "nvme_admin": false, 00:17:17.547 "nvme_io": false, 00:17:17.547 "nvme_io_md": false, 00:17:17.547 "write_zeroes": true, 00:17:17.547 "zcopy": true, 00:17:17.547 "get_zone_info": false, 00:17:17.547 "zone_management": false, 00:17:17.547 "zone_append": false, 00:17:17.547 "compare": false, 00:17:17.547 "compare_and_write": false, 00:17:17.547 "abort": true, 00:17:17.547 "seek_hole": false, 00:17:17.547 "seek_data": false, 00:17:17.547 "copy": true, 00:17:17.547 "nvme_iov_md": false 00:17:17.547 }, 00:17:17.547 "memory_domains": [ 00:17:17.547 { 00:17:17.547 "dma_device_id": "system", 00:17:17.547 "dma_device_type": 1 00:17:17.547 }, 00:17:17.547 { 00:17:17.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:17.547 "dma_device_type": 2 00:17:17.547 } 00:17:17.547 ], 00:17:17.547 "driver_specific": {} 00:17:17.547 } 00:17:17.547 ] 00:17:17.547 20:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.547 20:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:17.547 20:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:17.547 20:14:03 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:17.547 20:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:17.547 20:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.547 20:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.547 BaseBdev3 00:17:17.547 20:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.547 20:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:17:17.547 20:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:17:17.547 20:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:17.547 20:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:17.547 20:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:17.547 20:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:17.547 20:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:17.547 20:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.547 20:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.547 20:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.547 20:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:17.547 20:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.547 20:14:03 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:17.547 [ 00:17:17.547 { 00:17:17.547 "name": "BaseBdev3", 00:17:17.547 "aliases": [ 00:17:17.547 "d2851559-f8ba-4b05-8d92-0f0f3e11dfa1" 00:17:17.547 ], 00:17:17.547 "product_name": "Malloc disk", 00:17:17.547 "block_size": 512, 00:17:17.547 "num_blocks": 65536, 00:17:17.547 "uuid": "d2851559-f8ba-4b05-8d92-0f0f3e11dfa1", 00:17:17.547 "assigned_rate_limits": { 00:17:17.547 "rw_ios_per_sec": 0, 00:17:17.547 "rw_mbytes_per_sec": 0, 00:17:17.547 "r_mbytes_per_sec": 0, 00:17:17.547 "w_mbytes_per_sec": 0 00:17:17.547 }, 00:17:17.547 "claimed": false, 00:17:17.547 "zoned": false, 00:17:17.547 "supported_io_types": { 00:17:17.547 "read": true, 00:17:17.547 "write": true, 00:17:17.547 "unmap": true, 00:17:17.547 "flush": true, 00:17:17.547 "reset": true, 00:17:17.547 "nvme_admin": false, 00:17:17.547 "nvme_io": false, 00:17:17.547 "nvme_io_md": false, 00:17:17.547 "write_zeroes": true, 00:17:17.547 "zcopy": true, 00:17:17.547 "get_zone_info": false, 00:17:17.547 "zone_management": false, 00:17:17.547 "zone_append": false, 00:17:17.547 "compare": false, 00:17:17.547 "compare_and_write": false, 00:17:17.547 "abort": true, 00:17:17.547 "seek_hole": false, 00:17:17.547 "seek_data": false, 00:17:17.547 "copy": true, 00:17:17.547 "nvme_iov_md": false 00:17:17.547 }, 00:17:17.547 "memory_domains": [ 00:17:17.547 { 00:17:17.547 "dma_device_id": "system", 00:17:17.547 "dma_device_type": 1 00:17:17.547 }, 00:17:17.547 { 00:17:17.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:17.547 "dma_device_type": 2 00:17:17.547 } 00:17:17.547 ], 00:17:17.547 "driver_specific": {} 00:17:17.547 } 00:17:17.547 ] 00:17:17.547 20:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.547 20:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:17.547 20:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:17.547 20:14:03 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:17.547 20:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:17.547 20:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.547 20:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.547 [2024-10-17 20:14:03.123737] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:17.547 [2024-10-17 20:14:03.123963] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:17.547 [2024-10-17 20:14:03.124189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:17.547 [2024-10-17 20:14:03.126598] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:17.547 20:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.547 20:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:17.547 20:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:17.547 20:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:17.547 20:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:17.547 20:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:17.547 20:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:17.547 20:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:17.547 20:14:03 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:17.547 20:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:17.547 20:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:17.547 20:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.547 20:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.547 20:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:17.547 20:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.547 20:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.547 20:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:17.547 "name": "Existed_Raid", 00:17:17.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.547 "strip_size_kb": 64, 00:17:17.547 "state": "configuring", 00:17:17.547 "raid_level": "raid5f", 00:17:17.547 "superblock": false, 00:17:17.547 "num_base_bdevs": 3, 00:17:17.547 "num_base_bdevs_discovered": 2, 00:17:17.547 "num_base_bdevs_operational": 3, 00:17:17.547 "base_bdevs_list": [ 00:17:17.547 { 00:17:17.547 "name": "BaseBdev1", 00:17:17.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.547 "is_configured": false, 00:17:17.547 "data_offset": 0, 00:17:17.547 "data_size": 0 00:17:17.547 }, 00:17:17.547 { 00:17:17.547 "name": "BaseBdev2", 00:17:17.547 "uuid": "47542be0-e059-4759-a056-b747435dba69", 00:17:17.547 "is_configured": true, 00:17:17.547 "data_offset": 0, 00:17:17.547 "data_size": 65536 00:17:17.547 }, 00:17:17.547 { 00:17:17.547 "name": "BaseBdev3", 00:17:17.547 "uuid": "d2851559-f8ba-4b05-8d92-0f0f3e11dfa1", 00:17:17.547 "is_configured": true, 
00:17:17.547 "data_offset": 0, 00:17:17.547 "data_size": 65536 00:17:17.547 } 00:17:17.547 ] 00:17:17.547 }' 00:17:17.548 20:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:17.548 20:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.114 20:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:18.114 20:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.114 20:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.114 [2024-10-17 20:14:03.663852] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:18.114 20:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.114 20:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:18.114 20:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:18.114 20:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:18.114 20:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:18.114 20:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:18.114 20:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:18.114 20:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:18.114 20:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:18.114 20:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:18.114 20:14:03 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:18.114 20:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.114 20:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.114 20:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:18.114 20:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.114 20:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.114 20:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:18.114 "name": "Existed_Raid", 00:17:18.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.114 "strip_size_kb": 64, 00:17:18.114 "state": "configuring", 00:17:18.114 "raid_level": "raid5f", 00:17:18.114 "superblock": false, 00:17:18.114 "num_base_bdevs": 3, 00:17:18.114 "num_base_bdevs_discovered": 1, 00:17:18.114 "num_base_bdevs_operational": 3, 00:17:18.114 "base_bdevs_list": [ 00:17:18.114 { 00:17:18.114 "name": "BaseBdev1", 00:17:18.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.114 "is_configured": false, 00:17:18.114 "data_offset": 0, 00:17:18.114 "data_size": 0 00:17:18.114 }, 00:17:18.114 { 00:17:18.114 "name": null, 00:17:18.114 "uuid": "47542be0-e059-4759-a056-b747435dba69", 00:17:18.114 "is_configured": false, 00:17:18.114 "data_offset": 0, 00:17:18.114 "data_size": 65536 00:17:18.114 }, 00:17:18.114 { 00:17:18.114 "name": "BaseBdev3", 00:17:18.114 "uuid": "d2851559-f8ba-4b05-8d92-0f0f3e11dfa1", 00:17:18.114 "is_configured": true, 00:17:18.114 "data_offset": 0, 00:17:18.114 "data_size": 65536 00:17:18.114 } 00:17:18.114 ] 00:17:18.114 }' 00:17:18.114 20:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:18.114 20:14:03 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.681 20:14:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:18.681 20:14:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.681 20:14:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.681 20:14:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.681 20:14:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.681 20:14:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:17:18.681 20:14:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:18.681 20:14:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.681 20:14:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.681 [2024-10-17 20:14:04.278772] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:18.681 BaseBdev1 00:17:18.681 20:14:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.681 20:14:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:17:18.681 20:14:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:18.681 20:14:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:18.681 20:14:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:18.681 20:14:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:18.681 20:14:04 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:18.681 20:14:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:18.681 20:14:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.681 20:14:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.681 20:14:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.681 20:14:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:18.681 20:14:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.681 20:14:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.681 [ 00:17:18.681 { 00:17:18.681 "name": "BaseBdev1", 00:17:18.681 "aliases": [ 00:17:18.681 "28137344-793a-4ab2-8482-94cc4bf124c0" 00:17:18.681 ], 00:17:18.681 "product_name": "Malloc disk", 00:17:18.681 "block_size": 512, 00:17:18.681 "num_blocks": 65536, 00:17:18.681 "uuid": "28137344-793a-4ab2-8482-94cc4bf124c0", 00:17:18.681 "assigned_rate_limits": { 00:17:18.681 "rw_ios_per_sec": 0, 00:17:18.681 "rw_mbytes_per_sec": 0, 00:17:18.681 "r_mbytes_per_sec": 0, 00:17:18.681 "w_mbytes_per_sec": 0 00:17:18.681 }, 00:17:18.681 "claimed": true, 00:17:18.681 "claim_type": "exclusive_write", 00:17:18.681 "zoned": false, 00:17:18.681 "supported_io_types": { 00:17:18.681 "read": true, 00:17:18.681 "write": true, 00:17:18.681 "unmap": true, 00:17:18.681 "flush": true, 00:17:18.681 "reset": true, 00:17:18.681 "nvme_admin": false, 00:17:18.681 "nvme_io": false, 00:17:18.682 "nvme_io_md": false, 00:17:18.682 "write_zeroes": true, 00:17:18.682 "zcopy": true, 00:17:18.682 "get_zone_info": false, 00:17:18.682 "zone_management": false, 00:17:18.682 "zone_append": false, 00:17:18.682 
"compare": false, 00:17:18.682 "compare_and_write": false, 00:17:18.682 "abort": true, 00:17:18.682 "seek_hole": false, 00:17:18.682 "seek_data": false, 00:17:18.682 "copy": true, 00:17:18.682 "nvme_iov_md": false 00:17:18.682 }, 00:17:18.682 "memory_domains": [ 00:17:18.682 { 00:17:18.682 "dma_device_id": "system", 00:17:18.682 "dma_device_type": 1 00:17:18.682 }, 00:17:18.682 { 00:17:18.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:18.682 "dma_device_type": 2 00:17:18.682 } 00:17:18.682 ], 00:17:18.682 "driver_specific": {} 00:17:18.682 } 00:17:18.682 ] 00:17:18.682 20:14:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.682 20:14:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:18.682 20:14:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:18.682 20:14:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:18.682 20:14:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:18.682 20:14:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:18.682 20:14:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:18.682 20:14:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:18.682 20:14:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:18.682 20:14:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:18.682 20:14:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:18.682 20:14:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:18.682 20:14:04 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.682 20:14:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:18.682 20:14:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.682 20:14:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.940 20:14:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.940 20:14:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:18.940 "name": "Existed_Raid", 00:17:18.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.940 "strip_size_kb": 64, 00:17:18.940 "state": "configuring", 00:17:18.940 "raid_level": "raid5f", 00:17:18.940 "superblock": false, 00:17:18.940 "num_base_bdevs": 3, 00:17:18.940 "num_base_bdevs_discovered": 2, 00:17:18.940 "num_base_bdevs_operational": 3, 00:17:18.940 "base_bdevs_list": [ 00:17:18.940 { 00:17:18.940 "name": "BaseBdev1", 00:17:18.940 "uuid": "28137344-793a-4ab2-8482-94cc4bf124c0", 00:17:18.940 "is_configured": true, 00:17:18.940 "data_offset": 0, 00:17:18.940 "data_size": 65536 00:17:18.940 }, 00:17:18.940 { 00:17:18.940 "name": null, 00:17:18.940 "uuid": "47542be0-e059-4759-a056-b747435dba69", 00:17:18.940 "is_configured": false, 00:17:18.940 "data_offset": 0, 00:17:18.940 "data_size": 65536 00:17:18.940 }, 00:17:18.940 { 00:17:18.940 "name": "BaseBdev3", 00:17:18.940 "uuid": "d2851559-f8ba-4b05-8d92-0f0f3e11dfa1", 00:17:18.940 "is_configured": true, 00:17:18.940 "data_offset": 0, 00:17:18.940 "data_size": 65536 00:17:18.940 } 00:17:18.940 ] 00:17:18.940 }' 00:17:18.940 20:14:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:18.940 20:14:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.199 20:14:04 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.199 20:14:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.199 20:14:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:19.199 20:14:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.199 20:14:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.457 20:14:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:17:19.457 20:14:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:17:19.457 20:14:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.457 20:14:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.457 [2024-10-17 20:14:04.883060] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:19.457 20:14:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.458 20:14:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:19.458 20:14:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:19.458 20:14:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:19.458 20:14:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:19.458 20:14:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:19.458 20:14:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:19.458 20:14:04 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:19.458 20:14:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:19.458 20:14:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:19.458 20:14:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:19.458 20:14:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.458 20:14:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:19.458 20:14:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.458 20:14:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.458 20:14:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.458 20:14:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:19.458 "name": "Existed_Raid", 00:17:19.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.458 "strip_size_kb": 64, 00:17:19.458 "state": "configuring", 00:17:19.458 "raid_level": "raid5f", 00:17:19.458 "superblock": false, 00:17:19.458 "num_base_bdevs": 3, 00:17:19.458 "num_base_bdevs_discovered": 1, 00:17:19.458 "num_base_bdevs_operational": 3, 00:17:19.458 "base_bdevs_list": [ 00:17:19.458 { 00:17:19.458 "name": "BaseBdev1", 00:17:19.458 "uuid": "28137344-793a-4ab2-8482-94cc4bf124c0", 00:17:19.458 "is_configured": true, 00:17:19.458 "data_offset": 0, 00:17:19.458 "data_size": 65536 00:17:19.458 }, 00:17:19.458 { 00:17:19.458 "name": null, 00:17:19.458 "uuid": "47542be0-e059-4759-a056-b747435dba69", 00:17:19.458 "is_configured": false, 00:17:19.458 "data_offset": 0, 00:17:19.458 "data_size": 65536 00:17:19.458 }, 00:17:19.458 { 00:17:19.458 "name": null, 
00:17:19.458 "uuid": "d2851559-f8ba-4b05-8d92-0f0f3e11dfa1", 00:17:19.458 "is_configured": false, 00:17:19.458 "data_offset": 0, 00:17:19.458 "data_size": 65536 00:17:19.458 } 00:17:19.458 ] 00:17:19.458 }' 00:17:19.458 20:14:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:19.458 20:14:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.024 20:14:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:20.024 20:14:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.024 20:14:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.024 20:14:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.024 20:14:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.024 20:14:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:20.024 20:14:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:20.024 20:14:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.024 20:14:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.024 [2024-10-17 20:14:05.471227] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:20.024 20:14:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.024 20:14:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:20.024 20:14:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:20.024 20:14:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:20.024 20:14:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:20.024 20:14:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:20.024 20:14:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:20.024 20:14:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:20.024 20:14:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:20.025 20:14:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:20.025 20:14:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:20.025 20:14:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.025 20:14:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:20.025 20:14:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.025 20:14:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.025 20:14:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.025 20:14:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:20.025 "name": "Existed_Raid", 00:17:20.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.025 "strip_size_kb": 64, 00:17:20.025 "state": "configuring", 00:17:20.025 "raid_level": "raid5f", 00:17:20.025 "superblock": false, 00:17:20.025 "num_base_bdevs": 3, 00:17:20.025 "num_base_bdevs_discovered": 2, 00:17:20.025 "num_base_bdevs_operational": 3, 00:17:20.025 "base_bdevs_list": [ 00:17:20.025 { 
00:17:20.025 "name": "BaseBdev1", 00:17:20.025 "uuid": "28137344-793a-4ab2-8482-94cc4bf124c0", 00:17:20.025 "is_configured": true, 00:17:20.025 "data_offset": 0, 00:17:20.025 "data_size": 65536 00:17:20.025 }, 00:17:20.025 { 00:17:20.025 "name": null, 00:17:20.025 "uuid": "47542be0-e059-4759-a056-b747435dba69", 00:17:20.025 "is_configured": false, 00:17:20.025 "data_offset": 0, 00:17:20.025 "data_size": 65536 00:17:20.025 }, 00:17:20.025 { 00:17:20.025 "name": "BaseBdev3", 00:17:20.025 "uuid": "d2851559-f8ba-4b05-8d92-0f0f3e11dfa1", 00:17:20.025 "is_configured": true, 00:17:20.025 "data_offset": 0, 00:17:20.025 "data_size": 65536 00:17:20.025 } 00:17:20.025 ] 00:17:20.025 }' 00:17:20.025 20:14:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:20.025 20:14:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.619 20:14:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.619 20:14:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.619 20:14:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:20.619 20:14:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.619 20:14:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.619 20:14:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:20.619 20:14:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:20.619 20:14:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.619 20:14:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.619 [2024-10-17 20:14:06.063546] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:20.619 20:14:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.619 20:14:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:20.619 20:14:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:20.619 20:14:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:20.619 20:14:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:20.619 20:14:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:20.619 20:14:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:20.619 20:14:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:20.619 20:14:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:20.619 20:14:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:20.619 20:14:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:20.619 20:14:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.619 20:14:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.619 20:14:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.619 20:14:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:20.619 20:14:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.620 20:14:06 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:20.620 "name": "Existed_Raid", 00:17:20.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.620 "strip_size_kb": 64, 00:17:20.620 "state": "configuring", 00:17:20.620 "raid_level": "raid5f", 00:17:20.620 "superblock": false, 00:17:20.620 "num_base_bdevs": 3, 00:17:20.620 "num_base_bdevs_discovered": 1, 00:17:20.620 "num_base_bdevs_operational": 3, 00:17:20.620 "base_bdevs_list": [ 00:17:20.620 { 00:17:20.620 "name": null, 00:17:20.620 "uuid": "28137344-793a-4ab2-8482-94cc4bf124c0", 00:17:20.620 "is_configured": false, 00:17:20.620 "data_offset": 0, 00:17:20.620 "data_size": 65536 00:17:20.620 }, 00:17:20.620 { 00:17:20.620 "name": null, 00:17:20.620 "uuid": "47542be0-e059-4759-a056-b747435dba69", 00:17:20.620 "is_configured": false, 00:17:20.620 "data_offset": 0, 00:17:20.620 "data_size": 65536 00:17:20.620 }, 00:17:20.620 { 00:17:20.620 "name": "BaseBdev3", 00:17:20.620 "uuid": "d2851559-f8ba-4b05-8d92-0f0f3e11dfa1", 00:17:20.620 "is_configured": true, 00:17:20.620 "data_offset": 0, 00:17:20.620 "data_size": 65536 00:17:20.620 } 00:17:20.620 ] 00:17:20.620 }' 00:17:20.620 20:14:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:20.620 20:14:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.189 20:14:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.189 20:14:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:21.189 20:14:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.189 20:14:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.189 20:14:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.189 20:14:06 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:17:21.189 20:14:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:21.189 20:14:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.189 20:14:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.189 [2024-10-17 20:14:06.729310] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:21.189 20:14:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.189 20:14:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:21.189 20:14:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:21.189 20:14:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:21.189 20:14:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:21.189 20:14:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:21.189 20:14:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:21.189 20:14:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:21.189 20:14:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:21.189 20:14:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:21.189 20:14:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:21.189 20:14:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.189 20:14:06 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.189 20:14:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:21.189 20:14:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.189 20:14:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.189 20:14:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:21.189 "name": "Existed_Raid", 00:17:21.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.189 "strip_size_kb": 64, 00:17:21.189 "state": "configuring", 00:17:21.189 "raid_level": "raid5f", 00:17:21.189 "superblock": false, 00:17:21.189 "num_base_bdevs": 3, 00:17:21.189 "num_base_bdevs_discovered": 2, 00:17:21.189 "num_base_bdevs_operational": 3, 00:17:21.189 "base_bdevs_list": [ 00:17:21.189 { 00:17:21.189 "name": null, 00:17:21.189 "uuid": "28137344-793a-4ab2-8482-94cc4bf124c0", 00:17:21.189 "is_configured": false, 00:17:21.189 "data_offset": 0, 00:17:21.189 "data_size": 65536 00:17:21.189 }, 00:17:21.189 { 00:17:21.189 "name": "BaseBdev2", 00:17:21.189 "uuid": "47542be0-e059-4759-a056-b747435dba69", 00:17:21.189 "is_configured": true, 00:17:21.189 "data_offset": 0, 00:17:21.189 "data_size": 65536 00:17:21.189 }, 00:17:21.189 { 00:17:21.189 "name": "BaseBdev3", 00:17:21.189 "uuid": "d2851559-f8ba-4b05-8d92-0f0f3e11dfa1", 00:17:21.189 "is_configured": true, 00:17:21.189 "data_offset": 0, 00:17:21.189 "data_size": 65536 00:17:21.189 } 00:17:21.189 ] 00:17:21.189 }' 00:17:21.189 20:14:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:21.189 20:14:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.755 20:14:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.755 20:14:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:21.755 20:14:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.755 20:14:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.755 20:14:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.755 20:14:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:21.755 20:14:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.755 20:14:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.755 20:14:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.755 20:14:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:21.755 20:14:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.755 20:14:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 28137344-793a-4ab2-8482-94cc4bf124c0 00:17:21.755 20:14:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.755 20:14:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.013 [2024-10-17 20:14:07.407331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:22.014 [2024-10-17 20:14:07.407405] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:22.014 [2024-10-17 20:14:07.407419] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:17:22.014 [2024-10-17 20:14:07.407701] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:17:22.014 [2024-10-17 20:14:07.412397] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:22.014 [2024-10-17 20:14:07.412436] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:22.014 [2024-10-17 20:14:07.412805] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:22.014 NewBaseBdev 00:17:22.014 20:14:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.014 20:14:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:22.014 20:14:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:17:22.014 20:14:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:22.014 20:14:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:22.014 20:14:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:22.014 20:14:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:22.014 20:14:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:22.014 20:14:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.014 20:14:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.014 20:14:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.014 20:14:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:22.014 20:14:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.014 20:14:07 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.014 [ 00:17:22.014 { 00:17:22.014 "name": "NewBaseBdev", 00:17:22.014 "aliases": [ 00:17:22.014 "28137344-793a-4ab2-8482-94cc4bf124c0" 00:17:22.014 ], 00:17:22.014 "product_name": "Malloc disk", 00:17:22.014 "block_size": 512, 00:17:22.014 "num_blocks": 65536, 00:17:22.014 "uuid": "28137344-793a-4ab2-8482-94cc4bf124c0", 00:17:22.014 "assigned_rate_limits": { 00:17:22.014 "rw_ios_per_sec": 0, 00:17:22.014 "rw_mbytes_per_sec": 0, 00:17:22.014 "r_mbytes_per_sec": 0, 00:17:22.014 "w_mbytes_per_sec": 0 00:17:22.014 }, 00:17:22.014 "claimed": true, 00:17:22.014 "claim_type": "exclusive_write", 00:17:22.014 "zoned": false, 00:17:22.014 "supported_io_types": { 00:17:22.014 "read": true, 00:17:22.014 "write": true, 00:17:22.014 "unmap": true, 00:17:22.014 "flush": true, 00:17:22.014 "reset": true, 00:17:22.014 "nvme_admin": false, 00:17:22.014 "nvme_io": false, 00:17:22.014 "nvme_io_md": false, 00:17:22.014 "write_zeroes": true, 00:17:22.014 "zcopy": true, 00:17:22.014 "get_zone_info": false, 00:17:22.014 "zone_management": false, 00:17:22.014 "zone_append": false, 00:17:22.014 "compare": false, 00:17:22.014 "compare_and_write": false, 00:17:22.014 "abort": true, 00:17:22.014 "seek_hole": false, 00:17:22.014 "seek_data": false, 00:17:22.014 "copy": true, 00:17:22.014 "nvme_iov_md": false 00:17:22.014 }, 00:17:22.014 "memory_domains": [ 00:17:22.014 { 00:17:22.014 "dma_device_id": "system", 00:17:22.014 "dma_device_type": 1 00:17:22.014 }, 00:17:22.014 { 00:17:22.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:22.014 "dma_device_type": 2 00:17:22.014 } 00:17:22.014 ], 00:17:22.014 "driver_specific": {} 00:17:22.014 } 00:17:22.014 ] 00:17:22.014 20:14:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.014 20:14:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:22.014 20:14:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:17:22.014 20:14:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:22.014 20:14:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:22.014 20:14:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:22.014 20:14:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:22.014 20:14:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:22.014 20:14:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:22.014 20:14:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:22.014 20:14:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:22.014 20:14:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:22.014 20:14:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.014 20:14:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:22.014 20:14:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.014 20:14:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.014 20:14:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.014 20:14:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:22.014 "name": "Existed_Raid", 00:17:22.014 "uuid": "544052e7-9830-4880-b919-096913a1c8f2", 00:17:22.014 "strip_size_kb": 64, 00:17:22.014 "state": "online", 
00:17:22.014 "raid_level": "raid5f", 00:17:22.014 "superblock": false, 00:17:22.014 "num_base_bdevs": 3, 00:17:22.014 "num_base_bdevs_discovered": 3, 00:17:22.014 "num_base_bdevs_operational": 3, 00:17:22.014 "base_bdevs_list": [ 00:17:22.014 { 00:17:22.014 "name": "NewBaseBdev", 00:17:22.014 "uuid": "28137344-793a-4ab2-8482-94cc4bf124c0", 00:17:22.014 "is_configured": true, 00:17:22.014 "data_offset": 0, 00:17:22.014 "data_size": 65536 00:17:22.014 }, 00:17:22.014 { 00:17:22.014 "name": "BaseBdev2", 00:17:22.014 "uuid": "47542be0-e059-4759-a056-b747435dba69", 00:17:22.014 "is_configured": true, 00:17:22.014 "data_offset": 0, 00:17:22.014 "data_size": 65536 00:17:22.014 }, 00:17:22.014 { 00:17:22.014 "name": "BaseBdev3", 00:17:22.014 "uuid": "d2851559-f8ba-4b05-8d92-0f0f3e11dfa1", 00:17:22.014 "is_configured": true, 00:17:22.014 "data_offset": 0, 00:17:22.014 "data_size": 65536 00:17:22.014 } 00:17:22.014 ] 00:17:22.014 }' 00:17:22.014 20:14:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:22.014 20:14:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.582 20:14:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:22.582 20:14:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:22.582 20:14:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:22.582 20:14:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:22.582 20:14:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:22.582 20:14:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:22.582 20:14:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:22.582 20:14:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:22.582 20:14:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.582 20:14:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.582 [2024-10-17 20:14:07.982702] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:22.582 20:14:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.582 20:14:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:22.582 "name": "Existed_Raid", 00:17:22.582 "aliases": [ 00:17:22.582 "544052e7-9830-4880-b919-096913a1c8f2" 00:17:22.582 ], 00:17:22.582 "product_name": "Raid Volume", 00:17:22.582 "block_size": 512, 00:17:22.582 "num_blocks": 131072, 00:17:22.582 "uuid": "544052e7-9830-4880-b919-096913a1c8f2", 00:17:22.582 "assigned_rate_limits": { 00:17:22.582 "rw_ios_per_sec": 0, 00:17:22.582 "rw_mbytes_per_sec": 0, 00:17:22.582 "r_mbytes_per_sec": 0, 00:17:22.582 "w_mbytes_per_sec": 0 00:17:22.582 }, 00:17:22.582 "claimed": false, 00:17:22.582 "zoned": false, 00:17:22.582 "supported_io_types": { 00:17:22.582 "read": true, 00:17:22.582 "write": true, 00:17:22.582 "unmap": false, 00:17:22.582 "flush": false, 00:17:22.582 "reset": true, 00:17:22.582 "nvme_admin": false, 00:17:22.582 "nvme_io": false, 00:17:22.582 "nvme_io_md": false, 00:17:22.582 "write_zeroes": true, 00:17:22.582 "zcopy": false, 00:17:22.582 "get_zone_info": false, 00:17:22.582 "zone_management": false, 00:17:22.582 "zone_append": false, 00:17:22.582 "compare": false, 00:17:22.582 "compare_and_write": false, 00:17:22.582 "abort": false, 00:17:22.582 "seek_hole": false, 00:17:22.582 "seek_data": false, 00:17:22.582 "copy": false, 00:17:22.582 "nvme_iov_md": false 00:17:22.582 }, 00:17:22.582 "driver_specific": { 00:17:22.582 "raid": { 00:17:22.582 "uuid": 
"544052e7-9830-4880-b919-096913a1c8f2", 00:17:22.582 "strip_size_kb": 64, 00:17:22.582 "state": "online", 00:17:22.582 "raid_level": "raid5f", 00:17:22.582 "superblock": false, 00:17:22.582 "num_base_bdevs": 3, 00:17:22.582 "num_base_bdevs_discovered": 3, 00:17:22.582 "num_base_bdevs_operational": 3, 00:17:22.582 "base_bdevs_list": [ 00:17:22.582 { 00:17:22.582 "name": "NewBaseBdev", 00:17:22.582 "uuid": "28137344-793a-4ab2-8482-94cc4bf124c0", 00:17:22.582 "is_configured": true, 00:17:22.582 "data_offset": 0, 00:17:22.582 "data_size": 65536 00:17:22.582 }, 00:17:22.582 { 00:17:22.582 "name": "BaseBdev2", 00:17:22.582 "uuid": "47542be0-e059-4759-a056-b747435dba69", 00:17:22.582 "is_configured": true, 00:17:22.582 "data_offset": 0, 00:17:22.582 "data_size": 65536 00:17:22.582 }, 00:17:22.582 { 00:17:22.582 "name": "BaseBdev3", 00:17:22.582 "uuid": "d2851559-f8ba-4b05-8d92-0f0f3e11dfa1", 00:17:22.582 "is_configured": true, 00:17:22.582 "data_offset": 0, 00:17:22.582 "data_size": 65536 00:17:22.582 } 00:17:22.582 ] 00:17:22.582 } 00:17:22.582 } 00:17:22.582 }' 00:17:22.582 20:14:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:22.582 20:14:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:22.582 BaseBdev2 00:17:22.582 BaseBdev3' 00:17:22.582 20:14:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:22.582 20:14:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:22.582 20:14:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:22.582 20:14:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:22.582 20:14:08 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:22.582 20:14:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.582 20:14:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.582 20:14:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.582 20:14:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:22.582 20:14:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:22.582 20:14:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:22.582 20:14:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:22.582 20:14:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:22.582 20:14:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.582 20:14:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.582 20:14:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.841 20:14:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:22.841 20:14:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:22.841 20:14:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:22.841 20:14:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:22.841 20:14:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:17:22.841 20:14:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.841 20:14:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.841 20:14:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.841 20:14:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:22.841 20:14:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:22.841 20:14:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:22.841 20:14:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.841 20:14:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.841 [2024-10-17 20:14:08.318569] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:22.841 [2024-10-17 20:14:08.318603] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:22.841 [2024-10-17 20:14:08.318698] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:22.841 [2024-10-17 20:14:08.319086] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:22.841 [2024-10-17 20:14:08.319109] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:17:22.841 20:14:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.841 20:14:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80117 00:17:22.841 20:14:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 80117 ']' 00:17:22.841 20:14:08 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@954 -- # kill -0 80117 00:17:22.841 20:14:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:17:22.841 20:14:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:22.841 20:14:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80117 00:17:22.841 20:14:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:22.841 20:14:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:22.841 killing process with pid 80117 00:17:22.841 20:14:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80117' 00:17:22.841 20:14:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 80117 00:17:22.841 [2024-10-17 20:14:08.357244] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:22.841 20:14:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 80117 00:17:23.100 [2024-10-17 20:14:08.610469] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:24.034 20:14:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:17:24.034 00:17:24.034 real 0m11.911s 00:17:24.034 user 0m19.799s 00:17:24.034 sys 0m1.707s 00:17:24.034 ************************************ 00:17:24.034 END TEST raid5f_state_function_test 00:17:24.034 ************************************ 00:17:24.034 20:14:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:24.034 20:14:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.034 20:14:09 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:17:24.034 20:14:09 bdev_raid -- 
common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:24.034 20:14:09 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:24.034 20:14:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:24.034 ************************************ 00:17:24.034 START TEST raid5f_state_function_test_sb 00:17:24.034 ************************************ 00:17:24.034 20:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 true 00:17:24.035 20:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:17:24.035 20:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:17:24.035 20:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:24.035 20:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:24.035 20:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:24.035 20:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:24.035 20:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:24.035 20:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:24.035 20:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:24.035 20:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:24.035 20:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:24.035 20:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:24.035 20:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:17:24.035 20:14:09 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:24.035 20:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:24.035 20:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:24.035 20:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:24.035 20:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:24.035 20:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:24.035 20:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:24.035 20:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:24.035 20:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:17:24.035 20:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:17:24.035 20:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:17:24.035 20:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:24.035 20:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:24.035 20:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80751 00:17:24.035 20:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:24.035 Process raid pid: 80751 00:17:24.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:24.035 20:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80751' 00:17:24.035 20:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 80751 00:17:24.035 20:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 80751 ']' 00:17:24.035 20:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:24.035 20:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:24.035 20:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:24.035 20:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:24.035 20:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.294 [2024-10-17 20:14:09.782832] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 
00:17:24.294 [2024-10-17 20:14:09.783068] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:24.552 [2024-10-17 20:14:09.959323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:24.552 [2024-10-17 20:14:10.087641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:24.811 [2024-10-17 20:14:10.278986] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:24.811 [2024-10-17 20:14:10.279207] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:25.379 20:14:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:25.379 20:14:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:17:25.379 20:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:25.379 20:14:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.379 20:14:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.379 [2024-10-17 20:14:10.761254] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:25.379 [2024-10-17 20:14:10.761328] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:25.379 [2024-10-17 20:14:10.761352] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:25.379 [2024-10-17 20:14:10.761399] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:25.379 [2024-10-17 20:14:10.761408] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:17:25.379 [2024-10-17 20:14:10.761422] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:25.379 20:14:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.379 20:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:25.379 20:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:25.379 20:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:25.379 20:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:25.379 20:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:25.379 20:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:25.379 20:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:25.379 20:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:25.379 20:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:25.379 20:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:25.379 20:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.379 20:14:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.379 20:14:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.379 20:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:25.379 20:14:10 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.379 20:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:25.379 "name": "Existed_Raid", 00:17:25.379 "uuid": "82913240-5e34-45d0-80ac-6688d637997a", 00:17:25.379 "strip_size_kb": 64, 00:17:25.379 "state": "configuring", 00:17:25.379 "raid_level": "raid5f", 00:17:25.379 "superblock": true, 00:17:25.379 "num_base_bdevs": 3, 00:17:25.379 "num_base_bdevs_discovered": 0, 00:17:25.379 "num_base_bdevs_operational": 3, 00:17:25.379 "base_bdevs_list": [ 00:17:25.379 { 00:17:25.379 "name": "BaseBdev1", 00:17:25.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.379 "is_configured": false, 00:17:25.379 "data_offset": 0, 00:17:25.379 "data_size": 0 00:17:25.379 }, 00:17:25.379 { 00:17:25.379 "name": "BaseBdev2", 00:17:25.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.379 "is_configured": false, 00:17:25.379 "data_offset": 0, 00:17:25.379 "data_size": 0 00:17:25.379 }, 00:17:25.379 { 00:17:25.379 "name": "BaseBdev3", 00:17:25.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.379 "is_configured": false, 00:17:25.379 "data_offset": 0, 00:17:25.379 "data_size": 0 00:17:25.379 } 00:17:25.379 ] 00:17:25.379 }' 00:17:25.379 20:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.379 20:14:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.947 20:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:25.947 20:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.947 20:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.947 [2024-10-17 20:14:11.301804] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:25.947 
[2024-10-17 20:14:11.301846] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:25.947 20:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.947 20:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:25.947 20:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.947 20:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.947 [2024-10-17 20:14:11.309830] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:25.947 [2024-10-17 20:14:11.309896] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:25.947 [2024-10-17 20:14:11.309910] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:25.947 [2024-10-17 20:14:11.309924] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:25.947 [2024-10-17 20:14:11.309933] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:25.947 [2024-10-17 20:14:11.309946] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:25.947 20:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.947 20:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:25.947 20:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.947 20:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.947 [2024-10-17 20:14:11.353114] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:25.947 BaseBdev1 00:17:25.947 20:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.947 20:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:25.947 20:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:25.947 20:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:25.947 20:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:25.947 20:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:25.947 20:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:25.947 20:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:25.947 20:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.947 20:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.947 20:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.947 20:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:25.947 20:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.947 20:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.947 [ 00:17:25.947 { 00:17:25.947 "name": "BaseBdev1", 00:17:25.947 "aliases": [ 00:17:25.947 "7afc00f9-4b65-4a75-9480-5af6f1055073" 00:17:25.947 ], 00:17:25.947 "product_name": "Malloc disk", 00:17:25.947 "block_size": 512, 00:17:25.947 
"num_blocks": 65536, 00:17:25.947 "uuid": "7afc00f9-4b65-4a75-9480-5af6f1055073", 00:17:25.947 "assigned_rate_limits": { 00:17:25.947 "rw_ios_per_sec": 0, 00:17:25.947 "rw_mbytes_per_sec": 0, 00:17:25.947 "r_mbytes_per_sec": 0, 00:17:25.947 "w_mbytes_per_sec": 0 00:17:25.947 }, 00:17:25.947 "claimed": true, 00:17:25.947 "claim_type": "exclusive_write", 00:17:25.947 "zoned": false, 00:17:25.947 "supported_io_types": { 00:17:25.947 "read": true, 00:17:25.947 "write": true, 00:17:25.947 "unmap": true, 00:17:25.947 "flush": true, 00:17:25.947 "reset": true, 00:17:25.947 "nvme_admin": false, 00:17:25.947 "nvme_io": false, 00:17:25.947 "nvme_io_md": false, 00:17:25.947 "write_zeroes": true, 00:17:25.947 "zcopy": true, 00:17:25.947 "get_zone_info": false, 00:17:25.947 "zone_management": false, 00:17:25.947 "zone_append": false, 00:17:25.947 "compare": false, 00:17:25.947 "compare_and_write": false, 00:17:25.947 "abort": true, 00:17:25.947 "seek_hole": false, 00:17:25.947 "seek_data": false, 00:17:25.947 "copy": true, 00:17:25.947 "nvme_iov_md": false 00:17:25.947 }, 00:17:25.947 "memory_domains": [ 00:17:25.947 { 00:17:25.947 "dma_device_id": "system", 00:17:25.947 "dma_device_type": 1 00:17:25.947 }, 00:17:25.947 { 00:17:25.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:25.947 "dma_device_type": 2 00:17:25.947 } 00:17:25.947 ], 00:17:25.947 "driver_specific": {} 00:17:25.947 } 00:17:25.947 ] 00:17:25.947 20:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.947 20:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:25.947 20:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:25.947 20:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:25.947 20:14:11 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:25.947 20:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:25.947 20:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:25.947 20:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:25.947 20:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:25.947 20:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:25.947 20:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:25.948 20:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:25.948 20:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.948 20:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:25.948 20:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.948 20:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.948 20:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.948 20:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:25.948 "name": "Existed_Raid", 00:17:25.948 "uuid": "81d94c37-283f-44ed-955f-c56094f03245", 00:17:25.948 "strip_size_kb": 64, 00:17:25.948 "state": "configuring", 00:17:25.948 "raid_level": "raid5f", 00:17:25.948 "superblock": true, 00:17:25.948 "num_base_bdevs": 3, 00:17:25.948 "num_base_bdevs_discovered": 1, 00:17:25.948 "num_base_bdevs_operational": 3, 00:17:25.948 "base_bdevs_list": [ 00:17:25.948 { 00:17:25.948 
"name": "BaseBdev1", 00:17:25.948 "uuid": "7afc00f9-4b65-4a75-9480-5af6f1055073", 00:17:25.948 "is_configured": true, 00:17:25.948 "data_offset": 2048, 00:17:25.948 "data_size": 63488 00:17:25.948 }, 00:17:25.948 { 00:17:25.948 "name": "BaseBdev2", 00:17:25.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.948 "is_configured": false, 00:17:25.948 "data_offset": 0, 00:17:25.948 "data_size": 0 00:17:25.948 }, 00:17:25.948 { 00:17:25.948 "name": "BaseBdev3", 00:17:25.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.948 "is_configured": false, 00:17:25.948 "data_offset": 0, 00:17:25.948 "data_size": 0 00:17:25.948 } 00:17:25.948 ] 00:17:25.948 }' 00:17:25.948 20:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.948 20:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.515 20:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:26.515 20:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.515 20:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.515 [2024-10-17 20:14:11.913377] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:26.515 [2024-10-17 20:14:11.913453] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:26.515 20:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.515 20:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:26.515 20:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.515 20:14:11 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:17:26.515 [2024-10-17 20:14:11.921419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:26.515 [2024-10-17 20:14:11.923997] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:26.515 [2024-10-17 20:14:11.924075] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:26.515 [2024-10-17 20:14:11.924092] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:26.515 [2024-10-17 20:14:11.924108] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:26.515 20:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.515 20:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:26.515 20:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:26.515 20:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:26.515 20:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:26.515 20:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:26.515 20:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:26.515 20:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:26.515 20:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:26.515 20:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:26.515 20:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:17:26.515 20:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:26.515 20:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:26.515 20:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.515 20:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:26.515 20:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.515 20:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.515 20:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.515 20:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:26.515 "name": "Existed_Raid", 00:17:26.515 "uuid": "72b7b293-9e21-4d4c-96ce-0142fe490f89", 00:17:26.515 "strip_size_kb": 64, 00:17:26.515 "state": "configuring", 00:17:26.515 "raid_level": "raid5f", 00:17:26.515 "superblock": true, 00:17:26.515 "num_base_bdevs": 3, 00:17:26.515 "num_base_bdevs_discovered": 1, 00:17:26.515 "num_base_bdevs_operational": 3, 00:17:26.515 "base_bdevs_list": [ 00:17:26.515 { 00:17:26.515 "name": "BaseBdev1", 00:17:26.515 "uuid": "7afc00f9-4b65-4a75-9480-5af6f1055073", 00:17:26.515 "is_configured": true, 00:17:26.515 "data_offset": 2048, 00:17:26.515 "data_size": 63488 00:17:26.515 }, 00:17:26.515 { 00:17:26.515 "name": "BaseBdev2", 00:17:26.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.515 "is_configured": false, 00:17:26.515 "data_offset": 0, 00:17:26.515 "data_size": 0 00:17:26.515 }, 00:17:26.515 { 00:17:26.515 "name": "BaseBdev3", 00:17:26.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.515 "is_configured": false, 00:17:26.515 "data_offset": 0, 00:17:26.515 "data_size": 
0 00:17:26.515 } 00:17:26.515 ] 00:17:26.515 }' 00:17:26.515 20:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:26.515 20:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.082 20:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:27.082 20:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.082 20:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.082 [2024-10-17 20:14:12.501863] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:27.082 BaseBdev2 00:17:27.082 20:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.082 20:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:27.082 20:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:27.082 20:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:27.082 20:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:27.082 20:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:27.082 20:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:27.082 20:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:27.082 20:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.082 20:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.082 20:14:12 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.082 20:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:27.082 20:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.082 20:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.082 [ 00:17:27.082 { 00:17:27.082 "name": "BaseBdev2", 00:17:27.082 "aliases": [ 00:17:27.082 "455cf2b5-054a-4887-a012-865fc272d10b" 00:17:27.082 ], 00:17:27.082 "product_name": "Malloc disk", 00:17:27.082 "block_size": 512, 00:17:27.082 "num_blocks": 65536, 00:17:27.082 "uuid": "455cf2b5-054a-4887-a012-865fc272d10b", 00:17:27.082 "assigned_rate_limits": { 00:17:27.082 "rw_ios_per_sec": 0, 00:17:27.082 "rw_mbytes_per_sec": 0, 00:17:27.082 "r_mbytes_per_sec": 0, 00:17:27.082 "w_mbytes_per_sec": 0 00:17:27.082 }, 00:17:27.082 "claimed": true, 00:17:27.082 "claim_type": "exclusive_write", 00:17:27.082 "zoned": false, 00:17:27.082 "supported_io_types": { 00:17:27.082 "read": true, 00:17:27.082 "write": true, 00:17:27.082 "unmap": true, 00:17:27.082 "flush": true, 00:17:27.082 "reset": true, 00:17:27.082 "nvme_admin": false, 00:17:27.082 "nvme_io": false, 00:17:27.082 "nvme_io_md": false, 00:17:27.082 "write_zeroes": true, 00:17:27.082 "zcopy": true, 00:17:27.082 "get_zone_info": false, 00:17:27.082 "zone_management": false, 00:17:27.082 "zone_append": false, 00:17:27.082 "compare": false, 00:17:27.082 "compare_and_write": false, 00:17:27.082 "abort": true, 00:17:27.082 "seek_hole": false, 00:17:27.082 "seek_data": false, 00:17:27.082 "copy": true, 00:17:27.082 "nvme_iov_md": false 00:17:27.082 }, 00:17:27.082 "memory_domains": [ 00:17:27.082 { 00:17:27.082 "dma_device_id": "system", 00:17:27.082 "dma_device_type": 1 00:17:27.082 }, 00:17:27.082 { 00:17:27.082 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:27.082 "dma_device_type": 2 00:17:27.082 } 
00:17:27.082 ], 00:17:27.082 "driver_specific": {} 00:17:27.082 } 00:17:27.082 ] 00:17:27.082 20:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.082 20:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:27.082 20:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:27.082 20:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:27.082 20:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:27.082 20:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:27.082 20:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:27.082 20:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:27.082 20:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:27.082 20:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:27.082 20:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:27.082 20:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:27.082 20:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:27.082 20:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:27.082 20:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:27.082 20:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:17:27.082 20:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.082 20:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.082 20:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.082 20:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:27.082 "name": "Existed_Raid", 00:17:27.082 "uuid": "72b7b293-9e21-4d4c-96ce-0142fe490f89", 00:17:27.082 "strip_size_kb": 64, 00:17:27.082 "state": "configuring", 00:17:27.082 "raid_level": "raid5f", 00:17:27.082 "superblock": true, 00:17:27.082 "num_base_bdevs": 3, 00:17:27.082 "num_base_bdevs_discovered": 2, 00:17:27.082 "num_base_bdevs_operational": 3, 00:17:27.082 "base_bdevs_list": [ 00:17:27.082 { 00:17:27.082 "name": "BaseBdev1", 00:17:27.082 "uuid": "7afc00f9-4b65-4a75-9480-5af6f1055073", 00:17:27.082 "is_configured": true, 00:17:27.082 "data_offset": 2048, 00:17:27.082 "data_size": 63488 00:17:27.082 }, 00:17:27.082 { 00:17:27.082 "name": "BaseBdev2", 00:17:27.082 "uuid": "455cf2b5-054a-4887-a012-865fc272d10b", 00:17:27.082 "is_configured": true, 00:17:27.082 "data_offset": 2048, 00:17:27.082 "data_size": 63488 00:17:27.082 }, 00:17:27.082 { 00:17:27.082 "name": "BaseBdev3", 00:17:27.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.082 "is_configured": false, 00:17:27.082 "data_offset": 0, 00:17:27.082 "data_size": 0 00:17:27.082 } 00:17:27.082 ] 00:17:27.082 }' 00:17:27.082 20:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:27.082 20:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.649 20:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:27.649 20:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:17:27.649 20:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.649 [2024-10-17 20:14:13.103773] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:27.649 [2024-10-17 20:14:13.104446] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:27.649 [2024-10-17 20:14:13.104485] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:27.649 BaseBdev3 00:17:27.649 [2024-10-17 20:14:13.104859] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:27.649 20:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.650 20:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:27.650 20:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:17:27.650 20:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:27.650 20:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:27.650 20:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:27.650 20:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:27.650 20:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:27.650 20:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.650 20:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.650 [2024-10-17 20:14:13.110217] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:27.650 [2024-10-17 20:14:13.110242] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:27.650 [2024-10-17 20:14:13.110578] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:27.650 20:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.650 20:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:27.650 20:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.650 20:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.650 [ 00:17:27.650 { 00:17:27.650 "name": "BaseBdev3", 00:17:27.650 "aliases": [ 00:17:27.650 "7d5dcf48-8cea-4799-949e-f91de7028aa0" 00:17:27.650 ], 00:17:27.650 "product_name": "Malloc disk", 00:17:27.650 "block_size": 512, 00:17:27.650 "num_blocks": 65536, 00:17:27.650 "uuid": "7d5dcf48-8cea-4799-949e-f91de7028aa0", 00:17:27.650 "assigned_rate_limits": { 00:17:27.650 "rw_ios_per_sec": 0, 00:17:27.650 "rw_mbytes_per_sec": 0, 00:17:27.650 "r_mbytes_per_sec": 0, 00:17:27.650 "w_mbytes_per_sec": 0 00:17:27.650 }, 00:17:27.650 "claimed": true, 00:17:27.650 "claim_type": "exclusive_write", 00:17:27.650 "zoned": false, 00:17:27.650 "supported_io_types": { 00:17:27.650 "read": true, 00:17:27.650 "write": true, 00:17:27.650 "unmap": true, 00:17:27.650 "flush": true, 00:17:27.650 "reset": true, 00:17:27.650 "nvme_admin": false, 00:17:27.650 "nvme_io": false, 00:17:27.650 "nvme_io_md": false, 00:17:27.650 "write_zeroes": true, 00:17:27.650 "zcopy": true, 00:17:27.650 "get_zone_info": false, 00:17:27.650 "zone_management": false, 00:17:27.650 "zone_append": false, 00:17:27.650 "compare": false, 00:17:27.650 "compare_and_write": false, 00:17:27.650 "abort": true, 00:17:27.650 "seek_hole": false, 00:17:27.650 "seek_data": false, 00:17:27.650 "copy": true, 00:17:27.650 
"nvme_iov_md": false 00:17:27.650 }, 00:17:27.650 "memory_domains": [ 00:17:27.650 { 00:17:27.650 "dma_device_id": "system", 00:17:27.650 "dma_device_type": 1 00:17:27.650 }, 00:17:27.650 { 00:17:27.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:27.650 "dma_device_type": 2 00:17:27.650 } 00:17:27.650 ], 00:17:27.650 "driver_specific": {} 00:17:27.650 } 00:17:27.650 ] 00:17:27.650 20:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.650 20:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:27.650 20:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:27.650 20:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:27.650 20:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:17:27.650 20:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:27.650 20:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:27.650 20:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:27.650 20:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:27.650 20:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:27.650 20:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:27.650 20:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:27.650 20:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:27.650 20:14:13 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:17:27.650 20:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.650 20:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:27.650 20:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.650 20:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.650 20:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.650 20:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:27.650 "name": "Existed_Raid", 00:17:27.650 "uuid": "72b7b293-9e21-4d4c-96ce-0142fe490f89", 00:17:27.650 "strip_size_kb": 64, 00:17:27.650 "state": "online", 00:17:27.650 "raid_level": "raid5f", 00:17:27.650 "superblock": true, 00:17:27.650 "num_base_bdevs": 3, 00:17:27.650 "num_base_bdevs_discovered": 3, 00:17:27.650 "num_base_bdevs_operational": 3, 00:17:27.650 "base_bdevs_list": [ 00:17:27.650 { 00:17:27.650 "name": "BaseBdev1", 00:17:27.650 "uuid": "7afc00f9-4b65-4a75-9480-5af6f1055073", 00:17:27.650 "is_configured": true, 00:17:27.650 "data_offset": 2048, 00:17:27.650 "data_size": 63488 00:17:27.650 }, 00:17:27.650 { 00:17:27.650 "name": "BaseBdev2", 00:17:27.650 "uuid": "455cf2b5-054a-4887-a012-865fc272d10b", 00:17:27.650 "is_configured": true, 00:17:27.650 "data_offset": 2048, 00:17:27.650 "data_size": 63488 00:17:27.650 }, 00:17:27.650 { 00:17:27.650 "name": "BaseBdev3", 00:17:27.650 "uuid": "7d5dcf48-8cea-4799-949e-f91de7028aa0", 00:17:27.650 "is_configured": true, 00:17:27.650 "data_offset": 2048, 00:17:27.650 "data_size": 63488 00:17:27.650 } 00:17:27.650 ] 00:17:27.650 }' 00:17:27.650 20:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:27.650 20:14:13 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.284 20:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:28.284 20:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:28.284 20:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:28.285 20:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:28.285 20:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:28.285 20:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:28.285 20:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:28.285 20:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:28.285 20:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.285 20:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.285 [2024-10-17 20:14:13.668472] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:28.285 20:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.285 20:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:28.285 "name": "Existed_Raid", 00:17:28.285 "aliases": [ 00:17:28.285 "72b7b293-9e21-4d4c-96ce-0142fe490f89" 00:17:28.285 ], 00:17:28.285 "product_name": "Raid Volume", 00:17:28.285 "block_size": 512, 00:17:28.285 "num_blocks": 126976, 00:17:28.285 "uuid": "72b7b293-9e21-4d4c-96ce-0142fe490f89", 00:17:28.285 "assigned_rate_limits": { 00:17:28.285 "rw_ios_per_sec": 0, 00:17:28.285 
"rw_mbytes_per_sec": 0, 00:17:28.285 "r_mbytes_per_sec": 0, 00:17:28.285 "w_mbytes_per_sec": 0 00:17:28.285 }, 00:17:28.285 "claimed": false, 00:17:28.285 "zoned": false, 00:17:28.285 "supported_io_types": { 00:17:28.285 "read": true, 00:17:28.285 "write": true, 00:17:28.285 "unmap": false, 00:17:28.285 "flush": false, 00:17:28.285 "reset": true, 00:17:28.285 "nvme_admin": false, 00:17:28.285 "nvme_io": false, 00:17:28.285 "nvme_io_md": false, 00:17:28.285 "write_zeroes": true, 00:17:28.285 "zcopy": false, 00:17:28.285 "get_zone_info": false, 00:17:28.285 "zone_management": false, 00:17:28.285 "zone_append": false, 00:17:28.285 "compare": false, 00:17:28.285 "compare_and_write": false, 00:17:28.285 "abort": false, 00:17:28.285 "seek_hole": false, 00:17:28.285 "seek_data": false, 00:17:28.285 "copy": false, 00:17:28.285 "nvme_iov_md": false 00:17:28.285 }, 00:17:28.285 "driver_specific": { 00:17:28.285 "raid": { 00:17:28.285 "uuid": "72b7b293-9e21-4d4c-96ce-0142fe490f89", 00:17:28.285 "strip_size_kb": 64, 00:17:28.285 "state": "online", 00:17:28.285 "raid_level": "raid5f", 00:17:28.285 "superblock": true, 00:17:28.285 "num_base_bdevs": 3, 00:17:28.285 "num_base_bdevs_discovered": 3, 00:17:28.285 "num_base_bdevs_operational": 3, 00:17:28.285 "base_bdevs_list": [ 00:17:28.285 { 00:17:28.285 "name": "BaseBdev1", 00:17:28.285 "uuid": "7afc00f9-4b65-4a75-9480-5af6f1055073", 00:17:28.285 "is_configured": true, 00:17:28.285 "data_offset": 2048, 00:17:28.285 "data_size": 63488 00:17:28.285 }, 00:17:28.285 { 00:17:28.285 "name": "BaseBdev2", 00:17:28.285 "uuid": "455cf2b5-054a-4887-a012-865fc272d10b", 00:17:28.285 "is_configured": true, 00:17:28.285 "data_offset": 2048, 00:17:28.285 "data_size": 63488 00:17:28.285 }, 00:17:28.285 { 00:17:28.285 "name": "BaseBdev3", 00:17:28.285 "uuid": "7d5dcf48-8cea-4799-949e-f91de7028aa0", 00:17:28.285 "is_configured": true, 00:17:28.285 "data_offset": 2048, 00:17:28.285 "data_size": 63488 00:17:28.285 } 00:17:28.285 ] 00:17:28.285 } 
00:17:28.285 } 00:17:28.285 }' 00:17:28.285 20:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:28.285 20:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:28.285 BaseBdev2 00:17:28.285 BaseBdev3' 00:17:28.285 20:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:28.285 20:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:28.285 20:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:28.285 20:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:28.285 20:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.285 20:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.285 20:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:28.285 20:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.285 20:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:28.285 20:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:28.285 20:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:28.285 20:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:28.285 20:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:28.285 20:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:28.285 20:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.285 20:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.544 20:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:28.544 20:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:28.544 20:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:28.544 20:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:28.544 20:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.544 20:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.544 20:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:28.544 20:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.544 20:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:28.544 20:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:28.544 20:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:28.544 20:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.544 20:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.544 [2024-10-17 20:14:13.996341] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:28.544 20:14:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.544 20:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:28.544 20:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:17:28.544 20:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:28.544 20:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:17:28.544 20:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:28.544 20:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:17:28.544 20:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:28.544 20:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:28.544 20:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:28.544 20:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:28.544 20:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:28.544 20:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:28.544 20:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:28.544 20:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:28.544 20:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:28.544 20:14:14 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.544 20:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:28.544 20:14:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.544 20:14:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.544 20:14:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.544 20:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:28.544 "name": "Existed_Raid", 00:17:28.544 "uuid": "72b7b293-9e21-4d4c-96ce-0142fe490f89", 00:17:28.544 "strip_size_kb": 64, 00:17:28.544 "state": "online", 00:17:28.544 "raid_level": "raid5f", 00:17:28.544 "superblock": true, 00:17:28.544 "num_base_bdevs": 3, 00:17:28.544 "num_base_bdevs_discovered": 2, 00:17:28.544 "num_base_bdevs_operational": 2, 00:17:28.544 "base_bdevs_list": [ 00:17:28.544 { 00:17:28.544 "name": null, 00:17:28.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.544 "is_configured": false, 00:17:28.544 "data_offset": 0, 00:17:28.544 "data_size": 63488 00:17:28.544 }, 00:17:28.544 { 00:17:28.544 "name": "BaseBdev2", 00:17:28.544 "uuid": "455cf2b5-054a-4887-a012-865fc272d10b", 00:17:28.544 "is_configured": true, 00:17:28.544 "data_offset": 2048, 00:17:28.544 "data_size": 63488 00:17:28.544 }, 00:17:28.544 { 00:17:28.544 "name": "BaseBdev3", 00:17:28.544 "uuid": "7d5dcf48-8cea-4799-949e-f91de7028aa0", 00:17:28.544 "is_configured": true, 00:17:28.544 "data_offset": 2048, 00:17:28.544 "data_size": 63488 00:17:28.544 } 00:17:28.544 ] 00:17:28.544 }' 00:17:28.544 20:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:28.544 20:14:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.110 20:14:14 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:29.110 20:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:29.110 20:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.110 20:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:29.110 20:14:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.110 20:14:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.110 20:14:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.110 20:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:29.110 20:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:29.110 20:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:29.110 20:14:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.110 20:14:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.110 [2024-10-17 20:14:14.680919] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:29.110 [2024-10-17 20:14:14.681149] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:29.368 [2024-10-17 20:14:14.761963] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:29.368 20:14:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.368 20:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:29.368 20:14:14 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:29.369 20:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.369 20:14:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.369 20:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:29.369 20:14:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.369 20:14:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.369 20:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:29.369 20:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:29.369 20:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:17:29.369 20:14:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.369 20:14:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.369 [2024-10-17 20:14:14.826052] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:29.369 [2024-10-17 20:14:14.826274] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:29.369 20:14:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.369 20:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:29.369 20:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:29.369 20:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.369 20:14:14 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.369 20:14:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.369 20:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:29.369 20:14:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.369 20:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:29.369 20:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:29.369 20:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:17:29.369 20:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:29.369 20:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:29.369 20:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:29.369 20:14:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.369 20:14:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.369 BaseBdev2 00:17:29.369 20:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.369 20:14:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:17:29.369 20:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:29.369 20:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:29.369 20:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:29.369 20:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # 
[[ -z '' ]] 00:17:29.369 20:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:29.369 20:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:29.369 20:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.369 20:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.369 20:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.369 20:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:29.369 20:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.369 20:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.627 [ 00:17:29.627 { 00:17:29.627 "name": "BaseBdev2", 00:17:29.627 "aliases": [ 00:17:29.627 "83251659-a97b-40da-971f-4c1b191ff62d" 00:17:29.627 ], 00:17:29.627 "product_name": "Malloc disk", 00:17:29.627 "block_size": 512, 00:17:29.627 "num_blocks": 65536, 00:17:29.627 "uuid": "83251659-a97b-40da-971f-4c1b191ff62d", 00:17:29.628 "assigned_rate_limits": { 00:17:29.628 "rw_ios_per_sec": 0, 00:17:29.628 "rw_mbytes_per_sec": 0, 00:17:29.628 "r_mbytes_per_sec": 0, 00:17:29.628 "w_mbytes_per_sec": 0 00:17:29.628 }, 00:17:29.628 "claimed": false, 00:17:29.628 "zoned": false, 00:17:29.628 "supported_io_types": { 00:17:29.628 "read": true, 00:17:29.628 "write": true, 00:17:29.628 "unmap": true, 00:17:29.628 "flush": true, 00:17:29.628 "reset": true, 00:17:29.628 "nvme_admin": false, 00:17:29.628 "nvme_io": false, 00:17:29.628 "nvme_io_md": false, 00:17:29.628 "write_zeroes": true, 00:17:29.628 "zcopy": true, 00:17:29.628 "get_zone_info": false, 00:17:29.628 "zone_management": false, 00:17:29.628 "zone_append": false, 
00:17:29.628 "compare": false, 00:17:29.628 "compare_and_write": false, 00:17:29.628 "abort": true, 00:17:29.628 "seek_hole": false, 00:17:29.628 "seek_data": false, 00:17:29.628 "copy": true, 00:17:29.628 "nvme_iov_md": false 00:17:29.628 }, 00:17:29.628 "memory_domains": [ 00:17:29.628 { 00:17:29.628 "dma_device_id": "system", 00:17:29.628 "dma_device_type": 1 00:17:29.628 }, 00:17:29.628 { 00:17:29.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:29.628 "dma_device_type": 2 00:17:29.628 } 00:17:29.628 ], 00:17:29.628 "driver_specific": {} 00:17:29.628 } 00:17:29.628 ] 00:17:29.628 20:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.628 20:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:29.628 20:14:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:29.628 20:14:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:29.628 20:14:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:29.628 20:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.628 20:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.628 BaseBdev3 00:17:29.628 20:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.628 20:14:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:17:29.628 20:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:17:29.628 20:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:29.628 20:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:29.628 
20:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:29.628 20:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:29.628 20:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:29.628 20:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.628 20:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.628 20:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.628 20:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:29.628 20:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.628 20:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.628 [ 00:17:29.628 { 00:17:29.628 "name": "BaseBdev3", 00:17:29.628 "aliases": [ 00:17:29.628 "56cefdbf-77d9-404a-b672-78ce17885ff6" 00:17:29.628 ], 00:17:29.628 "product_name": "Malloc disk", 00:17:29.628 "block_size": 512, 00:17:29.628 "num_blocks": 65536, 00:17:29.628 "uuid": "56cefdbf-77d9-404a-b672-78ce17885ff6", 00:17:29.628 "assigned_rate_limits": { 00:17:29.628 "rw_ios_per_sec": 0, 00:17:29.628 "rw_mbytes_per_sec": 0, 00:17:29.628 "r_mbytes_per_sec": 0, 00:17:29.628 "w_mbytes_per_sec": 0 00:17:29.628 }, 00:17:29.628 "claimed": false, 00:17:29.628 "zoned": false, 00:17:29.628 "supported_io_types": { 00:17:29.628 "read": true, 00:17:29.628 "write": true, 00:17:29.628 "unmap": true, 00:17:29.628 "flush": true, 00:17:29.628 "reset": true, 00:17:29.628 "nvme_admin": false, 00:17:29.628 "nvme_io": false, 00:17:29.628 "nvme_io_md": false, 00:17:29.628 "write_zeroes": true, 00:17:29.628 "zcopy": true, 00:17:29.628 "get_zone_info": 
false, 00:17:29.628 "zone_management": false, 00:17:29.628 "zone_append": false, 00:17:29.628 "compare": false, 00:17:29.628 "compare_and_write": false, 00:17:29.628 "abort": true, 00:17:29.628 "seek_hole": false, 00:17:29.628 "seek_data": false, 00:17:29.628 "copy": true, 00:17:29.628 "nvme_iov_md": false 00:17:29.628 }, 00:17:29.628 "memory_domains": [ 00:17:29.628 { 00:17:29.628 "dma_device_id": "system", 00:17:29.628 "dma_device_type": 1 00:17:29.628 }, 00:17:29.628 { 00:17:29.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:29.628 "dma_device_type": 2 00:17:29.628 } 00:17:29.628 ], 00:17:29.628 "driver_specific": {} 00:17:29.628 } 00:17:29.628 ] 00:17:29.628 20:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.628 20:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:29.628 20:14:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:29.628 20:14:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:29.628 20:14:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:29.628 20:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.628 20:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.628 [2024-10-17 20:14:15.120127] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:29.628 [2024-10-17 20:14:15.120199] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:29.628 [2024-10-17 20:14:15.120248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:29.628 [2024-10-17 20:14:15.122829] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:17:29.628 20:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.628 20:14:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:29.628 20:14:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:29.628 20:14:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:29.628 20:14:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:29.628 20:14:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:29.628 20:14:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:29.628 20:14:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:29.628 20:14:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:29.628 20:14:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:29.628 20:14:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:29.628 20:14:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.628 20:14:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:29.628 20:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.628 20:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.628 20:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.628 20:14:15 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:29.628 "name": "Existed_Raid", 00:17:29.628 "uuid": "d05ebdfe-757d-4ed4-9750-5a004c9bfaee", 00:17:29.628 "strip_size_kb": 64, 00:17:29.628 "state": "configuring", 00:17:29.628 "raid_level": "raid5f", 00:17:29.628 "superblock": true, 00:17:29.628 "num_base_bdevs": 3, 00:17:29.628 "num_base_bdevs_discovered": 2, 00:17:29.628 "num_base_bdevs_operational": 3, 00:17:29.628 "base_bdevs_list": [ 00:17:29.628 { 00:17:29.628 "name": "BaseBdev1", 00:17:29.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.628 "is_configured": false, 00:17:29.628 "data_offset": 0, 00:17:29.628 "data_size": 0 00:17:29.628 }, 00:17:29.628 { 00:17:29.628 "name": "BaseBdev2", 00:17:29.628 "uuid": "83251659-a97b-40da-971f-4c1b191ff62d", 00:17:29.628 "is_configured": true, 00:17:29.628 "data_offset": 2048, 00:17:29.628 "data_size": 63488 00:17:29.629 }, 00:17:29.629 { 00:17:29.629 "name": "BaseBdev3", 00:17:29.629 "uuid": "56cefdbf-77d9-404a-b672-78ce17885ff6", 00:17:29.629 "is_configured": true, 00:17:29.629 "data_offset": 2048, 00:17:29.629 "data_size": 63488 00:17:29.629 } 00:17:29.629 ] 00:17:29.629 }' 00:17:29.629 20:14:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:29.629 20:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.195 20:14:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:30.195 20:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.195 20:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.195 [2024-10-17 20:14:15.660252] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:30.195 20:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.195 
20:14:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:30.195 20:14:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:30.195 20:14:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:30.195 20:14:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:30.195 20:14:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:30.195 20:14:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:30.195 20:14:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:30.195 20:14:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:30.195 20:14:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:30.195 20:14:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.195 20:14:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.195 20:14:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:30.195 20:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.195 20:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.195 20:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.195 20:14:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:30.195 "name": "Existed_Raid", 00:17:30.195 "uuid": 
"d05ebdfe-757d-4ed4-9750-5a004c9bfaee", 00:17:30.195 "strip_size_kb": 64, 00:17:30.195 "state": "configuring", 00:17:30.195 "raid_level": "raid5f", 00:17:30.195 "superblock": true, 00:17:30.195 "num_base_bdevs": 3, 00:17:30.195 "num_base_bdevs_discovered": 1, 00:17:30.195 "num_base_bdevs_operational": 3, 00:17:30.195 "base_bdevs_list": [ 00:17:30.195 { 00:17:30.195 "name": "BaseBdev1", 00:17:30.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.195 "is_configured": false, 00:17:30.195 "data_offset": 0, 00:17:30.195 "data_size": 0 00:17:30.195 }, 00:17:30.195 { 00:17:30.195 "name": null, 00:17:30.195 "uuid": "83251659-a97b-40da-971f-4c1b191ff62d", 00:17:30.195 "is_configured": false, 00:17:30.195 "data_offset": 0, 00:17:30.195 "data_size": 63488 00:17:30.195 }, 00:17:30.195 { 00:17:30.195 "name": "BaseBdev3", 00:17:30.195 "uuid": "56cefdbf-77d9-404a-b672-78ce17885ff6", 00:17:30.195 "is_configured": true, 00:17:30.195 "data_offset": 2048, 00:17:30.195 "data_size": 63488 00:17:30.195 } 00:17:30.195 ] 00:17:30.195 }' 00:17:30.195 20:14:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:30.195 20:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.762 20:14:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.762 20:14:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.762 20:14:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:30.762 20:14:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.762 20:14:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.762 20:14:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:17:30.762 20:14:16 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:30.762 20:14:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.762 20:14:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.762 [2024-10-17 20:14:16.294708] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:30.762 BaseBdev1 00:17:30.762 20:14:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.762 20:14:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:17:30.762 20:14:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:30.762 20:14:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:30.762 20:14:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:30.762 20:14:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:30.762 20:14:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:30.762 20:14:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:30.762 20:14:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.762 20:14:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.762 20:14:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.762 20:14:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:30.762 20:14:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 
-- # xtrace_disable 00:17:30.762 20:14:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.762 [ 00:17:30.762 { 00:17:30.762 "name": "BaseBdev1", 00:17:30.762 "aliases": [ 00:17:30.762 "1e16fc8b-c830-4dd4-8d65-fb7d2ddde222" 00:17:30.762 ], 00:17:30.762 "product_name": "Malloc disk", 00:17:30.762 "block_size": 512, 00:17:30.762 "num_blocks": 65536, 00:17:30.762 "uuid": "1e16fc8b-c830-4dd4-8d65-fb7d2ddde222", 00:17:30.762 "assigned_rate_limits": { 00:17:30.763 "rw_ios_per_sec": 0, 00:17:30.763 "rw_mbytes_per_sec": 0, 00:17:30.763 "r_mbytes_per_sec": 0, 00:17:30.763 "w_mbytes_per_sec": 0 00:17:30.763 }, 00:17:30.763 "claimed": true, 00:17:30.763 "claim_type": "exclusive_write", 00:17:30.763 "zoned": false, 00:17:30.763 "supported_io_types": { 00:17:30.763 "read": true, 00:17:30.763 "write": true, 00:17:30.763 "unmap": true, 00:17:30.763 "flush": true, 00:17:30.763 "reset": true, 00:17:30.763 "nvme_admin": false, 00:17:30.763 "nvme_io": false, 00:17:30.763 "nvme_io_md": false, 00:17:30.763 "write_zeroes": true, 00:17:30.763 "zcopy": true, 00:17:30.763 "get_zone_info": false, 00:17:30.763 "zone_management": false, 00:17:30.763 "zone_append": false, 00:17:30.763 "compare": false, 00:17:30.763 "compare_and_write": false, 00:17:30.763 "abort": true, 00:17:30.763 "seek_hole": false, 00:17:30.763 "seek_data": false, 00:17:30.763 "copy": true, 00:17:30.763 "nvme_iov_md": false 00:17:30.763 }, 00:17:30.763 "memory_domains": [ 00:17:30.763 { 00:17:30.763 "dma_device_id": "system", 00:17:30.763 "dma_device_type": 1 00:17:30.763 }, 00:17:30.763 { 00:17:30.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:30.763 "dma_device_type": 2 00:17:30.763 } 00:17:30.763 ], 00:17:30.763 "driver_specific": {} 00:17:30.763 } 00:17:30.763 ] 00:17:30.763 20:14:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.763 20:14:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # 
return 0 00:17:30.763 20:14:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:30.763 20:14:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:30.763 20:14:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:30.763 20:14:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:30.763 20:14:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:30.763 20:14:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:30.763 20:14:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:30.763 20:14:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:30.763 20:14:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:30.763 20:14:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.763 20:14:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.763 20:14:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.763 20:14:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.763 20:14:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:30.763 20:14:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.763 20:14:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:30.763 "name": "Existed_Raid", 00:17:30.763 "uuid": 
"d05ebdfe-757d-4ed4-9750-5a004c9bfaee", 00:17:30.763 "strip_size_kb": 64, 00:17:30.763 "state": "configuring", 00:17:30.763 "raid_level": "raid5f", 00:17:30.763 "superblock": true, 00:17:30.763 "num_base_bdevs": 3, 00:17:30.763 "num_base_bdevs_discovered": 2, 00:17:30.763 "num_base_bdevs_operational": 3, 00:17:30.763 "base_bdevs_list": [ 00:17:30.763 { 00:17:30.763 "name": "BaseBdev1", 00:17:30.763 "uuid": "1e16fc8b-c830-4dd4-8d65-fb7d2ddde222", 00:17:30.763 "is_configured": true, 00:17:30.763 "data_offset": 2048, 00:17:30.763 "data_size": 63488 00:17:30.763 }, 00:17:30.763 { 00:17:30.763 "name": null, 00:17:30.763 "uuid": "83251659-a97b-40da-971f-4c1b191ff62d", 00:17:30.763 "is_configured": false, 00:17:30.763 "data_offset": 0, 00:17:30.763 "data_size": 63488 00:17:30.763 }, 00:17:30.763 { 00:17:30.763 "name": "BaseBdev3", 00:17:30.763 "uuid": "56cefdbf-77d9-404a-b672-78ce17885ff6", 00:17:30.763 "is_configured": true, 00:17:30.763 "data_offset": 2048, 00:17:30.763 "data_size": 63488 00:17:30.763 } 00:17:30.763 ] 00:17:30.763 }' 00:17:30.763 20:14:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:30.763 20:14:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.330 20:14:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:31.330 20:14:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.330 20:14:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.330 20:14:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.330 20:14:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.330 20:14:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:17:31.330 20:14:16 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:17:31.330 20:14:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.330 20:14:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.330 [2024-10-17 20:14:16.926923] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:31.330 20:14:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.330 20:14:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:31.330 20:14:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:31.330 20:14:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:31.330 20:14:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:31.330 20:14:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:31.330 20:14:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:31.330 20:14:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:31.330 20:14:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:31.330 20:14:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:31.330 20:14:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:31.330 20:14:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.330 20:14:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:17:31.330 20:14:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.330 20:14:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.330 20:14:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.588 20:14:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:31.588 "name": "Existed_Raid", 00:17:31.588 "uuid": "d05ebdfe-757d-4ed4-9750-5a004c9bfaee", 00:17:31.588 "strip_size_kb": 64, 00:17:31.588 "state": "configuring", 00:17:31.588 "raid_level": "raid5f", 00:17:31.588 "superblock": true, 00:17:31.588 "num_base_bdevs": 3, 00:17:31.588 "num_base_bdevs_discovered": 1, 00:17:31.588 "num_base_bdevs_operational": 3, 00:17:31.588 "base_bdevs_list": [ 00:17:31.588 { 00:17:31.588 "name": "BaseBdev1", 00:17:31.588 "uuid": "1e16fc8b-c830-4dd4-8d65-fb7d2ddde222", 00:17:31.588 "is_configured": true, 00:17:31.588 "data_offset": 2048, 00:17:31.588 "data_size": 63488 00:17:31.588 }, 00:17:31.588 { 00:17:31.588 "name": null, 00:17:31.588 "uuid": "83251659-a97b-40da-971f-4c1b191ff62d", 00:17:31.588 "is_configured": false, 00:17:31.588 "data_offset": 0, 00:17:31.588 "data_size": 63488 00:17:31.588 }, 00:17:31.588 { 00:17:31.588 "name": null, 00:17:31.588 "uuid": "56cefdbf-77d9-404a-b672-78ce17885ff6", 00:17:31.588 "is_configured": false, 00:17:31.588 "data_offset": 0, 00:17:31.588 "data_size": 63488 00:17:31.588 } 00:17:31.588 ] 00:17:31.588 }' 00:17:31.588 20:14:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:31.588 20:14:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.846 20:14:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.846 20:14:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # 
jq '.[0].base_bdevs_list[2].is_configured' 00:17:31.846 20:14:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.846 20:14:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.846 20:14:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.103 20:14:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:32.103 20:14:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:32.103 20:14:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.103 20:14:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.103 [2024-10-17 20:14:17.511159] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:32.103 20:14:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.103 20:14:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:32.103 20:14:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:32.103 20:14:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:32.104 20:14:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:32.104 20:14:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:32.104 20:14:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:32.104 20:14:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:32.104 20:14:17 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:32.104 20:14:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:32.104 20:14:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:32.104 20:14:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.104 20:14:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:32.104 20:14:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.104 20:14:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.104 20:14:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.104 20:14:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:32.104 "name": "Existed_Raid", 00:17:32.104 "uuid": "d05ebdfe-757d-4ed4-9750-5a004c9bfaee", 00:17:32.104 "strip_size_kb": 64, 00:17:32.104 "state": "configuring", 00:17:32.104 "raid_level": "raid5f", 00:17:32.104 "superblock": true, 00:17:32.104 "num_base_bdevs": 3, 00:17:32.104 "num_base_bdevs_discovered": 2, 00:17:32.104 "num_base_bdevs_operational": 3, 00:17:32.104 "base_bdevs_list": [ 00:17:32.104 { 00:17:32.104 "name": "BaseBdev1", 00:17:32.104 "uuid": "1e16fc8b-c830-4dd4-8d65-fb7d2ddde222", 00:17:32.104 "is_configured": true, 00:17:32.104 "data_offset": 2048, 00:17:32.104 "data_size": 63488 00:17:32.104 }, 00:17:32.104 { 00:17:32.104 "name": null, 00:17:32.104 "uuid": "83251659-a97b-40da-971f-4c1b191ff62d", 00:17:32.104 "is_configured": false, 00:17:32.104 "data_offset": 0, 00:17:32.104 "data_size": 63488 00:17:32.104 }, 00:17:32.104 { 00:17:32.104 "name": "BaseBdev3", 00:17:32.104 "uuid": "56cefdbf-77d9-404a-b672-78ce17885ff6", 00:17:32.104 
"is_configured": true, 00:17:32.104 "data_offset": 2048, 00:17:32.104 "data_size": 63488 00:17:32.104 } 00:17:32.104 ] 00:17:32.104 }' 00:17:32.104 20:14:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:32.104 20:14:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.671 20:14:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.671 20:14:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:32.671 20:14:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.671 20:14:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.671 20:14:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.671 20:14:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:32.671 20:14:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:32.671 20:14:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.671 20:14:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.671 [2024-10-17 20:14:18.107426] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:32.671 20:14:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.671 20:14:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:32.671 20:14:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:32.671 20:14:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:17:32.671 20:14:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:32.671 20:14:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:32.671 20:14:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:32.671 20:14:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:32.671 20:14:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:32.671 20:14:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:32.671 20:14:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:32.671 20:14:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.671 20:14:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.671 20:14:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.671 20:14:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:32.671 20:14:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.671 20:14:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:32.671 "name": "Existed_Raid", 00:17:32.671 "uuid": "d05ebdfe-757d-4ed4-9750-5a004c9bfaee", 00:17:32.671 "strip_size_kb": 64, 00:17:32.671 "state": "configuring", 00:17:32.671 "raid_level": "raid5f", 00:17:32.671 "superblock": true, 00:17:32.671 "num_base_bdevs": 3, 00:17:32.671 "num_base_bdevs_discovered": 1, 00:17:32.671 "num_base_bdevs_operational": 3, 00:17:32.671 "base_bdevs_list": [ 00:17:32.671 { 00:17:32.671 "name": null, 00:17:32.671 
"uuid": "1e16fc8b-c830-4dd4-8d65-fb7d2ddde222", 00:17:32.671 "is_configured": false, 00:17:32.671 "data_offset": 0, 00:17:32.671 "data_size": 63488 00:17:32.671 }, 00:17:32.671 { 00:17:32.671 "name": null, 00:17:32.671 "uuid": "83251659-a97b-40da-971f-4c1b191ff62d", 00:17:32.671 "is_configured": false, 00:17:32.671 "data_offset": 0, 00:17:32.671 "data_size": 63488 00:17:32.671 }, 00:17:32.671 { 00:17:32.671 "name": "BaseBdev3", 00:17:32.671 "uuid": "56cefdbf-77d9-404a-b672-78ce17885ff6", 00:17:32.671 "is_configured": true, 00:17:32.671 "data_offset": 2048, 00:17:32.671 "data_size": 63488 00:17:32.671 } 00:17:32.671 ] 00:17:32.671 }' 00:17:32.671 20:14:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:32.671 20:14:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.238 20:14:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:33.238 20:14:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.238 20:14:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.238 20:14:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.238 20:14:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.238 20:14:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:17:33.238 20:14:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:33.238 20:14:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.238 20:14:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.238 [2024-10-17 20:14:18.771210] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:33.238 20:14:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.238 20:14:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:33.238 20:14:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:33.238 20:14:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:33.238 20:14:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:33.238 20:14:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:33.238 20:14:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:33.238 20:14:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:33.238 20:14:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:33.238 20:14:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:33.238 20:14:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:33.238 20:14:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.238 20:14:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.238 20:14:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.238 20:14:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:33.238 20:14:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:17:33.238 20:14:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:33.238 "name": "Existed_Raid", 00:17:33.238 "uuid": "d05ebdfe-757d-4ed4-9750-5a004c9bfaee", 00:17:33.238 "strip_size_kb": 64, 00:17:33.238 "state": "configuring", 00:17:33.238 "raid_level": "raid5f", 00:17:33.238 "superblock": true, 00:17:33.238 "num_base_bdevs": 3, 00:17:33.238 "num_base_bdevs_discovered": 2, 00:17:33.238 "num_base_bdevs_operational": 3, 00:17:33.238 "base_bdevs_list": [ 00:17:33.238 { 00:17:33.238 "name": null, 00:17:33.238 "uuid": "1e16fc8b-c830-4dd4-8d65-fb7d2ddde222", 00:17:33.238 "is_configured": false, 00:17:33.238 "data_offset": 0, 00:17:33.238 "data_size": 63488 00:17:33.238 }, 00:17:33.238 { 00:17:33.238 "name": "BaseBdev2", 00:17:33.238 "uuid": "83251659-a97b-40da-971f-4c1b191ff62d", 00:17:33.238 "is_configured": true, 00:17:33.238 "data_offset": 2048, 00:17:33.238 "data_size": 63488 00:17:33.238 }, 00:17:33.238 { 00:17:33.238 "name": "BaseBdev3", 00:17:33.238 "uuid": "56cefdbf-77d9-404a-b672-78ce17885ff6", 00:17:33.238 "is_configured": true, 00:17:33.238 "data_offset": 2048, 00:17:33.238 "data_size": 63488 00:17:33.238 } 00:17:33.238 ] 00:17:33.238 }' 00:17:33.238 20:14:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:33.238 20:14:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.804 20:14:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.805 20:14:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:33.805 20:14:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.805 20:14:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.805 20:14:19 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.805 20:14:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:33.805 20:14:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.805 20:14:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:33.805 20:14:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.805 20:14:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.805 20:14:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.805 20:14:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 1e16fc8b-c830-4dd4-8d65-fb7d2ddde222 00:17:33.805 20:14:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.805 20:14:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.063 [2024-10-17 20:14:19.456864] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:34.063 [2024-10-17 20:14:19.457456] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:34.063 [2024-10-17 20:14:19.457488] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:34.063 NewBaseBdev 00:17:34.063 [2024-10-17 20:14:19.457846] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:17:34.063 20:14:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.063 20:14:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:34.063 20:14:19 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:17:34.063 20:14:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:34.063 20:14:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:34.063 20:14:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:34.063 20:14:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:34.063 20:14:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:34.063 20:14:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.063 20:14:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.063 [2024-10-17 20:14:19.463057] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:34.063 [2024-10-17 20:14:19.463082] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:34.063 [2024-10-17 20:14:19.463494] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:34.063 20:14:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.063 20:14:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:34.063 20:14:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.064 20:14:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.064 [ 00:17:34.064 { 00:17:34.064 "name": "NewBaseBdev", 00:17:34.064 "aliases": [ 00:17:34.064 "1e16fc8b-c830-4dd4-8d65-fb7d2ddde222" 00:17:34.064 ], 00:17:34.064 "product_name": "Malloc disk", 00:17:34.064 "block_size": 512, 
00:17:34.064 "num_blocks": 65536, 00:17:34.064 "uuid": "1e16fc8b-c830-4dd4-8d65-fb7d2ddde222", 00:17:34.064 "assigned_rate_limits": { 00:17:34.064 "rw_ios_per_sec": 0, 00:17:34.064 "rw_mbytes_per_sec": 0, 00:17:34.064 "r_mbytes_per_sec": 0, 00:17:34.064 "w_mbytes_per_sec": 0 00:17:34.064 }, 00:17:34.064 "claimed": true, 00:17:34.064 "claim_type": "exclusive_write", 00:17:34.064 "zoned": false, 00:17:34.064 "supported_io_types": { 00:17:34.064 "read": true, 00:17:34.064 "write": true, 00:17:34.064 "unmap": true, 00:17:34.064 "flush": true, 00:17:34.064 "reset": true, 00:17:34.064 "nvme_admin": false, 00:17:34.064 "nvme_io": false, 00:17:34.064 "nvme_io_md": false, 00:17:34.064 "write_zeroes": true, 00:17:34.064 "zcopy": true, 00:17:34.064 "get_zone_info": false, 00:17:34.064 "zone_management": false, 00:17:34.064 "zone_append": false, 00:17:34.064 "compare": false, 00:17:34.064 "compare_and_write": false, 00:17:34.064 "abort": true, 00:17:34.064 "seek_hole": false, 00:17:34.064 "seek_data": false, 00:17:34.064 "copy": true, 00:17:34.064 "nvme_iov_md": false 00:17:34.064 }, 00:17:34.064 "memory_domains": [ 00:17:34.064 { 00:17:34.064 "dma_device_id": "system", 00:17:34.064 "dma_device_type": 1 00:17:34.064 }, 00:17:34.064 { 00:17:34.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:34.064 "dma_device_type": 2 00:17:34.064 } 00:17:34.064 ], 00:17:34.064 "driver_specific": {} 00:17:34.064 } 00:17:34.064 ] 00:17:34.064 20:14:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.064 20:14:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:34.064 20:14:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:17:34.064 20:14:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:34.064 20:14:19 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:34.064 20:14:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:34.064 20:14:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:34.064 20:14:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:34.064 20:14:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:34.064 20:14:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:34.064 20:14:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:34.064 20:14:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:34.064 20:14:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.064 20:14:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.064 20:14:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.064 20:14:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:34.064 20:14:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.064 20:14:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:34.064 "name": "Existed_Raid", 00:17:34.064 "uuid": "d05ebdfe-757d-4ed4-9750-5a004c9bfaee", 00:17:34.064 "strip_size_kb": 64, 00:17:34.064 "state": "online", 00:17:34.064 "raid_level": "raid5f", 00:17:34.064 "superblock": true, 00:17:34.064 "num_base_bdevs": 3, 00:17:34.064 "num_base_bdevs_discovered": 3, 00:17:34.064 "num_base_bdevs_operational": 3, 00:17:34.064 "base_bdevs_list": [ 00:17:34.064 { 00:17:34.064 "name": 
"NewBaseBdev", 00:17:34.064 "uuid": "1e16fc8b-c830-4dd4-8d65-fb7d2ddde222", 00:17:34.064 "is_configured": true, 00:17:34.064 "data_offset": 2048, 00:17:34.064 "data_size": 63488 00:17:34.064 }, 00:17:34.064 { 00:17:34.064 "name": "BaseBdev2", 00:17:34.064 "uuid": "83251659-a97b-40da-971f-4c1b191ff62d", 00:17:34.064 "is_configured": true, 00:17:34.064 "data_offset": 2048, 00:17:34.064 "data_size": 63488 00:17:34.064 }, 00:17:34.064 { 00:17:34.064 "name": "BaseBdev3", 00:17:34.064 "uuid": "56cefdbf-77d9-404a-b672-78ce17885ff6", 00:17:34.064 "is_configured": true, 00:17:34.064 "data_offset": 2048, 00:17:34.064 "data_size": 63488 00:17:34.064 } 00:17:34.064 ] 00:17:34.064 }' 00:17:34.064 20:14:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:34.064 20:14:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.632 20:14:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:34.632 20:14:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:34.632 20:14:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:34.632 20:14:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:34.632 20:14:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:34.632 20:14:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:34.632 20:14:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:34.632 20:14:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:34.632 20:14:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.632 20:14:20 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.632 [2024-10-17 20:14:20.042218] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:34.632 20:14:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.632 20:14:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:34.632 "name": "Existed_Raid", 00:17:34.632 "aliases": [ 00:17:34.632 "d05ebdfe-757d-4ed4-9750-5a004c9bfaee" 00:17:34.632 ], 00:17:34.632 "product_name": "Raid Volume", 00:17:34.632 "block_size": 512, 00:17:34.632 "num_blocks": 126976, 00:17:34.632 "uuid": "d05ebdfe-757d-4ed4-9750-5a004c9bfaee", 00:17:34.632 "assigned_rate_limits": { 00:17:34.632 "rw_ios_per_sec": 0, 00:17:34.632 "rw_mbytes_per_sec": 0, 00:17:34.632 "r_mbytes_per_sec": 0, 00:17:34.632 "w_mbytes_per_sec": 0 00:17:34.632 }, 00:17:34.632 "claimed": false, 00:17:34.632 "zoned": false, 00:17:34.632 "supported_io_types": { 00:17:34.632 "read": true, 00:17:34.632 "write": true, 00:17:34.632 "unmap": false, 00:17:34.632 "flush": false, 00:17:34.632 "reset": true, 00:17:34.632 "nvme_admin": false, 00:17:34.632 "nvme_io": false, 00:17:34.632 "nvme_io_md": false, 00:17:34.632 "write_zeroes": true, 00:17:34.632 "zcopy": false, 00:17:34.632 "get_zone_info": false, 00:17:34.632 "zone_management": false, 00:17:34.632 "zone_append": false, 00:17:34.632 "compare": false, 00:17:34.632 "compare_and_write": false, 00:17:34.632 "abort": false, 00:17:34.632 "seek_hole": false, 00:17:34.632 "seek_data": false, 00:17:34.632 "copy": false, 00:17:34.632 "nvme_iov_md": false 00:17:34.632 }, 00:17:34.632 "driver_specific": { 00:17:34.632 "raid": { 00:17:34.632 "uuid": "d05ebdfe-757d-4ed4-9750-5a004c9bfaee", 00:17:34.632 "strip_size_kb": 64, 00:17:34.632 "state": "online", 00:17:34.632 "raid_level": "raid5f", 00:17:34.632 "superblock": true, 00:17:34.632 "num_base_bdevs": 3, 00:17:34.632 
"num_base_bdevs_discovered": 3, 00:17:34.632 "num_base_bdevs_operational": 3, 00:17:34.632 "base_bdevs_list": [ 00:17:34.632 { 00:17:34.632 "name": "NewBaseBdev", 00:17:34.632 "uuid": "1e16fc8b-c830-4dd4-8d65-fb7d2ddde222", 00:17:34.632 "is_configured": true, 00:17:34.632 "data_offset": 2048, 00:17:34.632 "data_size": 63488 00:17:34.632 }, 00:17:34.632 { 00:17:34.632 "name": "BaseBdev2", 00:17:34.632 "uuid": "83251659-a97b-40da-971f-4c1b191ff62d", 00:17:34.632 "is_configured": true, 00:17:34.632 "data_offset": 2048, 00:17:34.632 "data_size": 63488 00:17:34.632 }, 00:17:34.632 { 00:17:34.632 "name": "BaseBdev3", 00:17:34.632 "uuid": "56cefdbf-77d9-404a-b672-78ce17885ff6", 00:17:34.632 "is_configured": true, 00:17:34.632 "data_offset": 2048, 00:17:34.632 "data_size": 63488 00:17:34.632 } 00:17:34.632 ] 00:17:34.632 } 00:17:34.632 } 00:17:34.632 }' 00:17:34.632 20:14:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:34.632 20:14:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:34.632 BaseBdev2 00:17:34.632 BaseBdev3' 00:17:34.632 20:14:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:34.632 20:14:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:34.632 20:14:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:34.632 20:14:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:34.632 20:14:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:34.632 20:14:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:34.633 20:14:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.633 20:14:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.633 20:14:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:34.633 20:14:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:34.633 20:14:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:34.633 20:14:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:34.633 20:14:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.633 20:14:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.633 20:14:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:34.633 20:14:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.892 20:14:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:34.892 20:14:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:34.892 20:14:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:34.892 20:14:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:34.892 20:14:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:34.892 20:14:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.892 
20:14:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.892 20:14:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.892 20:14:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:34.892 20:14:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:34.892 20:14:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:34.892 20:14:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.892 20:14:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.892 [2024-10-17 20:14:20.378020] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:34.892 [2024-10-17 20:14:20.378070] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:34.892 [2024-10-17 20:14:20.378181] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:34.892 [2024-10-17 20:14:20.378555] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:34.892 [2024-10-17 20:14:20.378577] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:17:34.892 20:14:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.892 20:14:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80751 00:17:34.892 20:14:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 80751 ']' 00:17:34.892 20:14:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 80751 00:17:34.892 20:14:20 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@955 -- # uname 00:17:34.892 20:14:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:34.892 20:14:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80751 00:17:34.892 killing process with pid 80751 00:17:34.892 20:14:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:34.892 20:14:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:34.892 20:14:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80751' 00:17:34.892 20:14:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 80751 00:17:34.892 [2024-10-17 20:14:20.416416] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:34.892 20:14:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 80751 00:17:35.152 [2024-10-17 20:14:20.668676] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:36.089 20:14:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:17:36.089 00:17:36.089 real 0m11.987s 00:17:36.089 user 0m19.950s 00:17:36.089 sys 0m1.715s 00:17:36.089 20:14:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:36.089 20:14:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.089 ************************************ 00:17:36.089 END TEST raid5f_state_function_test_sb 00:17:36.089 ************************************ 00:17:36.089 20:14:21 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:17:36.089 20:14:21 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:17:36.089 20:14:21 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:17:36.089 20:14:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:36.089 ************************************ 00:17:36.089 START TEST raid5f_superblock_test 00:17:36.089 ************************************ 00:17:36.089 20:14:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 3 00:17:36.089 20:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:17:36.089 20:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:17:36.089 20:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:36.089 20:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:36.089 20:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:36.089 20:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:36.089 20:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:36.089 20:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:36.089 20:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:36.089 20:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:36.089 20:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:36.089 20:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:36.089 20:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:36.089 20:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:17:36.089 20:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # 
strip_size=64 00:17:36.089 20:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:17:36.089 20:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81386 00:17:36.089 20:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:36.089 20:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81386 00:17:36.089 20:14:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 81386 ']' 00:17:36.089 20:14:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:36.089 20:14:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:36.089 20:14:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:36.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:36.089 20:14:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:36.089 20:14:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.348 [2024-10-17 20:14:21.827304] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 
00:17:36.348 [2024-10-17 20:14:21.827737] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81386 ] 00:17:36.606 [2024-10-17 20:14:22.004269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:36.606 [2024-10-17 20:14:22.132020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:36.865 [2024-10-17 20:14:22.331156] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:36.865 [2024-10-17 20:14:22.331212] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:37.433 20:14:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:37.433 20:14:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:17:37.433 20:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:37.433 20:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:37.433 20:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:37.433 20:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:37.433 20:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:37.433 20:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:37.433 20:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:37.433 20:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:37.433 20:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:17:37.433 20:14:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.433 20:14:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.433 malloc1 00:17:37.433 20:14:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.433 20:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:37.433 20:14:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.433 20:14:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.433 [2024-10-17 20:14:22.871106] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:37.433 [2024-10-17 20:14:22.871370] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:37.433 [2024-10-17 20:14:22.871415] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:37.433 [2024-10-17 20:14:22.871432] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:37.433 [2024-10-17 20:14:22.874224] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:37.433 [2024-10-17 20:14:22.874267] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:37.433 pt1 00:17:37.433 20:14:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.433 20:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:37.433 20:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:37.433 20:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:37.433 20:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:17:37.433 20:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:37.433 20:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:37.433 20:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:37.433 20:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:37.433 20:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:17:37.433 20:14:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.433 20:14:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.433 malloc2 00:17:37.433 20:14:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.433 20:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:37.433 20:14:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.433 20:14:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.433 [2024-10-17 20:14:22.920249] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:37.433 [2024-10-17 20:14:22.920328] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:37.433 [2024-10-17 20:14:22.920359] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:37.433 [2024-10-17 20:14:22.920373] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:37.433 [2024-10-17 20:14:22.923067] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:37.433 [2024-10-17 20:14:22.923106] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:37.433 pt2 00:17:37.433 20:14:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.433 20:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:37.433 20:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:37.433 20:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:17:37.433 20:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:17:37.433 20:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:37.433 20:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:37.433 20:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:37.433 20:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:37.433 20:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:17:37.433 20:14:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.433 20:14:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.434 malloc3 00:17:37.434 20:14:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.434 20:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:37.434 20:14:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.434 20:14:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.434 [2024-10-17 20:14:22.981268] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:37.434 [2024-10-17 20:14:22.981349] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:37.434 [2024-10-17 20:14:22.981397] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:37.434 [2024-10-17 20:14:22.981426] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:37.434 [2024-10-17 20:14:22.983947] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:37.434 [2024-10-17 20:14:22.984202] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:37.434 pt3 00:17:37.434 20:14:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.434 20:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:37.434 20:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:37.434 20:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:17:37.434 20:14:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.434 20:14:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.434 [2024-10-17 20:14:22.993323] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:37.434 [2024-10-17 20:14:22.995649] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:37.434 [2024-10-17 20:14:22.995739] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:37.434 [2024-10-17 20:14:22.995943] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:37.434 [2024-10-17 20:14:22.995963] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:17:37.434 [2024-10-17 20:14:22.996320] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:37.434 [2024-10-17 20:14:23.001148] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:37.434 [2024-10-17 20:14:23.001171] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:37.434 [2024-10-17 20:14:23.001374] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:37.434 20:14:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.434 20:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:37.434 20:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:37.434 20:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:37.434 20:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:37.434 20:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:37.434 20:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:37.434 20:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:37.434 20:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:37.434 20:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:37.434 20:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:37.434 20:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.434 20:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:17:37.434 20:14:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.434 20:14:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.434 20:14:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.434 20:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:37.434 "name": "raid_bdev1", 00:17:37.434 "uuid": "ca298f42-f42c-44d6-832c-312b8bb705e3", 00:17:37.434 "strip_size_kb": 64, 00:17:37.434 "state": "online", 00:17:37.434 "raid_level": "raid5f", 00:17:37.434 "superblock": true, 00:17:37.434 "num_base_bdevs": 3, 00:17:37.434 "num_base_bdevs_discovered": 3, 00:17:37.434 "num_base_bdevs_operational": 3, 00:17:37.434 "base_bdevs_list": [ 00:17:37.434 { 00:17:37.434 "name": "pt1", 00:17:37.434 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:37.434 "is_configured": true, 00:17:37.434 "data_offset": 2048, 00:17:37.434 "data_size": 63488 00:17:37.434 }, 00:17:37.434 { 00:17:37.434 "name": "pt2", 00:17:37.434 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:37.434 "is_configured": true, 00:17:37.434 "data_offset": 2048, 00:17:37.434 "data_size": 63488 00:17:37.434 }, 00:17:37.434 { 00:17:37.434 "name": "pt3", 00:17:37.434 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:37.434 "is_configured": true, 00:17:37.434 "data_offset": 2048, 00:17:37.434 "data_size": 63488 00:17:37.434 } 00:17:37.434 ] 00:17:37.434 }' 00:17:37.434 20:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:37.434 20:14:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.001 20:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:38.001 20:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:38.001 20:14:23 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:38.001 20:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:38.001 20:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:38.001 20:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:38.001 20:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:38.001 20:14:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.001 20:14:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.001 20:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:38.001 [2024-10-17 20:14:23.507319] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:38.001 20:14:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.001 20:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:38.001 "name": "raid_bdev1", 00:17:38.001 "aliases": [ 00:17:38.001 "ca298f42-f42c-44d6-832c-312b8bb705e3" 00:17:38.001 ], 00:17:38.001 "product_name": "Raid Volume", 00:17:38.001 "block_size": 512, 00:17:38.001 "num_blocks": 126976, 00:17:38.001 "uuid": "ca298f42-f42c-44d6-832c-312b8bb705e3", 00:17:38.001 "assigned_rate_limits": { 00:17:38.001 "rw_ios_per_sec": 0, 00:17:38.001 "rw_mbytes_per_sec": 0, 00:17:38.001 "r_mbytes_per_sec": 0, 00:17:38.001 "w_mbytes_per_sec": 0 00:17:38.001 }, 00:17:38.001 "claimed": false, 00:17:38.001 "zoned": false, 00:17:38.001 "supported_io_types": { 00:17:38.001 "read": true, 00:17:38.001 "write": true, 00:17:38.001 "unmap": false, 00:17:38.001 "flush": false, 00:17:38.001 "reset": true, 00:17:38.001 "nvme_admin": false, 00:17:38.001 "nvme_io": false, 00:17:38.001 "nvme_io_md": false, 
00:17:38.001 "write_zeroes": true, 00:17:38.001 "zcopy": false, 00:17:38.001 "get_zone_info": false, 00:17:38.001 "zone_management": false, 00:17:38.001 "zone_append": false, 00:17:38.001 "compare": false, 00:17:38.001 "compare_and_write": false, 00:17:38.001 "abort": false, 00:17:38.001 "seek_hole": false, 00:17:38.001 "seek_data": false, 00:17:38.001 "copy": false, 00:17:38.001 "nvme_iov_md": false 00:17:38.001 }, 00:17:38.001 "driver_specific": { 00:17:38.001 "raid": { 00:17:38.001 "uuid": "ca298f42-f42c-44d6-832c-312b8bb705e3", 00:17:38.001 "strip_size_kb": 64, 00:17:38.001 "state": "online", 00:17:38.001 "raid_level": "raid5f", 00:17:38.001 "superblock": true, 00:17:38.001 "num_base_bdevs": 3, 00:17:38.001 "num_base_bdevs_discovered": 3, 00:17:38.001 "num_base_bdevs_operational": 3, 00:17:38.001 "base_bdevs_list": [ 00:17:38.001 { 00:17:38.001 "name": "pt1", 00:17:38.001 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:38.001 "is_configured": true, 00:17:38.001 "data_offset": 2048, 00:17:38.001 "data_size": 63488 00:17:38.001 }, 00:17:38.001 { 00:17:38.001 "name": "pt2", 00:17:38.001 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:38.001 "is_configured": true, 00:17:38.001 "data_offset": 2048, 00:17:38.001 "data_size": 63488 00:17:38.001 }, 00:17:38.001 { 00:17:38.001 "name": "pt3", 00:17:38.001 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:38.001 "is_configured": true, 00:17:38.001 "data_offset": 2048, 00:17:38.001 "data_size": 63488 00:17:38.001 } 00:17:38.001 ] 00:17:38.001 } 00:17:38.001 } 00:17:38.001 }' 00:17:38.001 20:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:38.001 20:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:38.001 pt2 00:17:38.001 pt3' 00:17:38.001 20:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:17:38.259 20:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:38.259 20:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:38.259 20:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:38.259 20:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:38.259 20:14:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.259 20:14:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.259 20:14:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.259 20:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:38.259 20:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:38.259 20:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:38.259 20:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:38.259 20:14:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.259 20:14:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.259 20:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:38.259 20:14:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.259 20:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:38.259 20:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:38.259 
20:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:38.259 20:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:38.259 20:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:38.259 20:14:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.259 20:14:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.259 20:14:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.259 20:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:38.259 20:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:38.259 20:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:38.259 20:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:38.259 20:14:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.259 20:14:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.259 [2024-10-17 20:14:23.843430] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:38.259 20:14:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.259 20:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=ca298f42-f42c-44d6-832c-312b8bb705e3 00:17:38.259 20:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z ca298f42-f42c-44d6-832c-312b8bb705e3 ']' 00:17:38.259 20:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:38.259 20:14:23 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.259 20:14:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.259 [2024-10-17 20:14:23.887136] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:38.259 [2024-10-17 20:14:23.887167] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:38.259 [2024-10-17 20:14:23.887261] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:38.259 [2024-10-17 20:14:23.887371] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:38.259 [2024-10-17 20:14:23.887416] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:38.259 20:14:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.259 20:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.259 20:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:38.259 20:14:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.259 20:14:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.259 20:14:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.518 20:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:38.518 20:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:38.518 20:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:38.518 20:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:38.518 20:14:23 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.518 20:14:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.518 20:14:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.518 20:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:38.518 20:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:38.518 20:14:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.518 20:14:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.518 20:14:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.518 20:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:38.518 20:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:17:38.518 20:14:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.518 20:14:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.518 20:14:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.518 20:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:38.518 20:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:38.518 20:14:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.518 20:14:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.518 20:14:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.518 20:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:17:38.518 20:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:17:38.518 20:14:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:17:38.518 20:14:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:17:38.518 20:14:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:38.518 20:14:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:38.518 20:14:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:38.518 20:14:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:38.518 20:14:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:17:38.518 20:14:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.518 20:14:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.518 [2024-10-17 20:14:24.043259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:38.518 [2024-10-17 20:14:24.046048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:38.518 [2024-10-17 20:14:24.046139] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:38.518 [2024-10-17 20:14:24.046213] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:38.518 [2024-10-17 20:14:24.046281] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:38.518 [2024-10-17 20:14:24.046313] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:17:38.518 [2024-10-17 20:14:24.046338] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:38.518 [2024-10-17 20:14:24.046351] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:38.518 request: 00:17:38.518 { 00:17:38.518 "name": "raid_bdev1", 00:17:38.518 "raid_level": "raid5f", 00:17:38.518 "base_bdevs": [ 00:17:38.518 "malloc1", 00:17:38.518 "malloc2", 00:17:38.518 "malloc3" 00:17:38.518 ], 00:17:38.518 "strip_size_kb": 64, 00:17:38.518 "superblock": false, 00:17:38.518 "method": "bdev_raid_create", 00:17:38.518 "req_id": 1 00:17:38.518 } 00:17:38.518 Got JSON-RPC error response 00:17:38.518 response: 00:17:38.518 { 00:17:38.518 "code": -17, 00:17:38.518 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:38.518 } 00:17:38.518 20:14:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:38.518 20:14:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:17:38.518 20:14:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:38.518 20:14:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:38.518 20:14:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:38.518 20:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.518 20:14:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.518 20:14:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
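The trace above shows the `NOT rpc_cmd bdev_raid_create …` step: the test deliberately re-creates `raid_bdev1` over base bdevs that already carry a superblock, expects the RPC to fail with `-17` ("File exists"), and then inverts the exit status (`[[ 1 == 0 ]]`, `es=1`). As a minimal sketch of that status-inversion pattern — the real `NOT` helper lives in `autotest_common.sh` and is more elaborate; the details below are assumptions for illustration only:

```shell
#!/usr/bin/env bash
# Sketch of a status-inverting NOT helper, assuming the pattern visible in
# the xtrace log above (capture the wrapped command's exit status, succeed
# only if it failed). Not the actual autotest_common.sh implementation.
NOT() {
    local es=0
    "$@" || es=$?        # run the command, remember its exit status
    if (( es == 0 )); then
        return 1         # command unexpectedly succeeded
    fi
    return 0             # command failed, which is what the test wanted
}

# The duplicate bdev_raid_create attempt is stood in for here by "false".
if NOT false; then
    echo "duplicate create rejected as expected"
fi
```

This keeps the test green exactly when the RPC errors out, which is why the log records `[[ 1 == 0 ]]` (the raw status check failing) followed by `es=1` and a successful overall step.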
00:17:38.518 20:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:38.518 20:14:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.518 20:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:38.518 20:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:38.518 20:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:38.518 20:14:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.518 20:14:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.518 [2024-10-17 20:14:24.111175] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:38.518 [2024-10-17 20:14:24.111386] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:38.518 [2024-10-17 20:14:24.111454] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:38.518 [2024-10-17 20:14:24.111667] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:38.518 [2024-10-17 20:14:24.114606] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:38.518 [2024-10-17 20:14:24.114767] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:38.518 [2024-10-17 20:14:24.114972] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:38.518 [2024-10-17 20:14:24.115192] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:38.518 pt1 00:17:38.518 20:14:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.518 20:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring 
raid5f 64 3 00:17:38.518 20:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:38.518 20:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:38.518 20:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:38.518 20:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:38.518 20:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:38.518 20:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:38.518 20:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:38.518 20:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:38.518 20:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:38.518 20:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.518 20:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.518 20:14:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.518 20:14:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.518 20:14:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.776 20:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:38.776 "name": "raid_bdev1", 00:17:38.776 "uuid": "ca298f42-f42c-44d6-832c-312b8bb705e3", 00:17:38.776 "strip_size_kb": 64, 00:17:38.776 "state": "configuring", 00:17:38.776 "raid_level": "raid5f", 00:17:38.776 "superblock": true, 00:17:38.776 "num_base_bdevs": 3, 00:17:38.776 "num_base_bdevs_discovered": 1, 00:17:38.776 
"num_base_bdevs_operational": 3, 00:17:38.776 "base_bdevs_list": [ 00:17:38.776 { 00:17:38.776 "name": "pt1", 00:17:38.776 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:38.776 "is_configured": true, 00:17:38.776 "data_offset": 2048, 00:17:38.776 "data_size": 63488 00:17:38.776 }, 00:17:38.776 { 00:17:38.776 "name": null, 00:17:38.776 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:38.776 "is_configured": false, 00:17:38.776 "data_offset": 2048, 00:17:38.776 "data_size": 63488 00:17:38.776 }, 00:17:38.776 { 00:17:38.776 "name": null, 00:17:38.776 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:38.776 "is_configured": false, 00:17:38.776 "data_offset": 2048, 00:17:38.776 "data_size": 63488 00:17:38.776 } 00:17:38.776 ] 00:17:38.776 }' 00:17:38.776 20:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:38.776 20:14:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.034 20:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:17:39.034 20:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:39.034 20:14:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.034 20:14:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.034 [2024-10-17 20:14:24.647780] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:39.034 [2024-10-17 20:14:24.647871] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:39.034 [2024-10-17 20:14:24.647907] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:17:39.034 [2024-10-17 20:14:24.647922] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:39.034 [2024-10-17 20:14:24.648511] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:39.034 [2024-10-17 20:14:24.648561] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:39.034 [2024-10-17 20:14:24.648671] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:39.034 [2024-10-17 20:14:24.648703] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:39.034 pt2 00:17:39.034 20:14:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.034 20:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:17:39.034 20:14:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.034 20:14:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.034 [2024-10-17 20:14:24.655760] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:39.034 20:14:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.034 20:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:17:39.034 20:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:39.034 20:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:39.034 20:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:39.034 20:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:39.034 20:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:39.034 20:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:39.034 20:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:17:39.034 20:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:39.034 20:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:39.034 20:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.034 20:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.034 20:14:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.034 20:14:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.034 20:14:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.293 20:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:39.293 "name": "raid_bdev1", 00:17:39.293 "uuid": "ca298f42-f42c-44d6-832c-312b8bb705e3", 00:17:39.293 "strip_size_kb": 64, 00:17:39.293 "state": "configuring", 00:17:39.293 "raid_level": "raid5f", 00:17:39.293 "superblock": true, 00:17:39.293 "num_base_bdevs": 3, 00:17:39.293 "num_base_bdevs_discovered": 1, 00:17:39.293 "num_base_bdevs_operational": 3, 00:17:39.293 "base_bdevs_list": [ 00:17:39.293 { 00:17:39.293 "name": "pt1", 00:17:39.293 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:39.293 "is_configured": true, 00:17:39.293 "data_offset": 2048, 00:17:39.293 "data_size": 63488 00:17:39.293 }, 00:17:39.293 { 00:17:39.293 "name": null, 00:17:39.293 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:39.293 "is_configured": false, 00:17:39.293 "data_offset": 0, 00:17:39.293 "data_size": 63488 00:17:39.293 }, 00:17:39.293 { 00:17:39.293 "name": null, 00:17:39.293 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:39.293 "is_configured": false, 00:17:39.293 "data_offset": 2048, 00:17:39.293 "data_size": 63488 00:17:39.293 } 00:17:39.293 ] 00:17:39.293 }' 00:17:39.293 20:14:24 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:39.293 20:14:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.558 20:14:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:39.558 20:14:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:39.558 20:14:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:39.558 20:14:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.558 20:14:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.558 [2024-10-17 20:14:25.183882] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:39.558 [2024-10-17 20:14:25.183975] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:39.558 [2024-10-17 20:14:25.184019] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:17:39.558 [2024-10-17 20:14:25.184070] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:39.558 [2024-10-17 20:14:25.184696] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:39.558 [2024-10-17 20:14:25.184725] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:39.558 [2024-10-17 20:14:25.184818] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:39.558 [2024-10-17 20:14:25.184853] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:39.558 pt2 00:17:39.559 20:14:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.559 20:14:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:39.559 20:14:25 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:39.559 20:14:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:39.559 20:14:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.559 20:14:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.559 [2024-10-17 20:14:25.195878] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:39.559 [2024-10-17 20:14:25.195946] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:39.559 [2024-10-17 20:14:25.195965] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:39.559 [2024-10-17 20:14:25.195981] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:39.559 [2024-10-17 20:14:25.196505] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:39.559 [2024-10-17 20:14:25.196564] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:39.559 [2024-10-17 20:14:25.196637] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:39.559 [2024-10-17 20:14:25.196668] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:39.559 [2024-10-17 20:14:25.196845] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:39.559 [2024-10-17 20:14:25.196865] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:39.559 [2024-10-17 20:14:25.197223] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:39.559 [2024-10-17 20:14:25.201966] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:39.559 [2024-10-17 20:14:25.201989] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:39.559 [2024-10-17 20:14:25.202216] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:39.559 pt3 00:17:39.559 20:14:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.559 20:14:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:39.559 20:14:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:39.559 20:14:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:39.559 20:14:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:39.559 20:14:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:39.559 20:14:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:39.559 20:14:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:39.559 20:14:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:39.559 20:14:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:39.559 20:14:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:39.559 20:14:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:39.559 20:14:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:39.817 20:14:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.817 20:14:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.817 20:14:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:17:39.817 20:14:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.817 20:14:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.817 20:14:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:39.817 "name": "raid_bdev1", 00:17:39.817 "uuid": "ca298f42-f42c-44d6-832c-312b8bb705e3", 00:17:39.817 "strip_size_kb": 64, 00:17:39.817 "state": "online", 00:17:39.817 "raid_level": "raid5f", 00:17:39.817 "superblock": true, 00:17:39.817 "num_base_bdevs": 3, 00:17:39.817 "num_base_bdevs_discovered": 3, 00:17:39.817 "num_base_bdevs_operational": 3, 00:17:39.817 "base_bdevs_list": [ 00:17:39.817 { 00:17:39.817 "name": "pt1", 00:17:39.817 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:39.817 "is_configured": true, 00:17:39.817 "data_offset": 2048, 00:17:39.817 "data_size": 63488 00:17:39.817 }, 00:17:39.817 { 00:17:39.817 "name": "pt2", 00:17:39.817 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:39.817 "is_configured": true, 00:17:39.817 "data_offset": 2048, 00:17:39.817 "data_size": 63488 00:17:39.817 }, 00:17:39.817 { 00:17:39.817 "name": "pt3", 00:17:39.817 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:39.817 "is_configured": true, 00:17:39.817 "data_offset": 2048, 00:17:39.817 "data_size": 63488 00:17:39.817 } 00:17:39.817 ] 00:17:39.817 }' 00:17:39.817 20:14:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:39.817 20:14:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.384 20:14:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:40.384 20:14:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:40.384 20:14:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:40.384 
20:14:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:40.384 20:14:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:40.384 20:14:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:40.384 20:14:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:40.384 20:14:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:40.384 20:14:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.384 20:14:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.384 [2024-10-17 20:14:25.740279] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:40.384 20:14:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.384 20:14:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:40.384 "name": "raid_bdev1", 00:17:40.384 "aliases": [ 00:17:40.384 "ca298f42-f42c-44d6-832c-312b8bb705e3" 00:17:40.384 ], 00:17:40.384 "product_name": "Raid Volume", 00:17:40.384 "block_size": 512, 00:17:40.384 "num_blocks": 126976, 00:17:40.384 "uuid": "ca298f42-f42c-44d6-832c-312b8bb705e3", 00:17:40.384 "assigned_rate_limits": { 00:17:40.384 "rw_ios_per_sec": 0, 00:17:40.384 "rw_mbytes_per_sec": 0, 00:17:40.384 "r_mbytes_per_sec": 0, 00:17:40.384 "w_mbytes_per_sec": 0 00:17:40.384 }, 00:17:40.384 "claimed": false, 00:17:40.384 "zoned": false, 00:17:40.384 "supported_io_types": { 00:17:40.384 "read": true, 00:17:40.384 "write": true, 00:17:40.384 "unmap": false, 00:17:40.384 "flush": false, 00:17:40.384 "reset": true, 00:17:40.384 "nvme_admin": false, 00:17:40.384 "nvme_io": false, 00:17:40.384 "nvme_io_md": false, 00:17:40.384 "write_zeroes": true, 00:17:40.384 "zcopy": false, 00:17:40.384 "get_zone_info": false, 
00:17:40.384 "zone_management": false, 00:17:40.384 "zone_append": false, 00:17:40.384 "compare": false, 00:17:40.384 "compare_and_write": false, 00:17:40.384 "abort": false, 00:17:40.384 "seek_hole": false, 00:17:40.384 "seek_data": false, 00:17:40.384 "copy": false, 00:17:40.384 "nvme_iov_md": false 00:17:40.384 }, 00:17:40.384 "driver_specific": { 00:17:40.384 "raid": { 00:17:40.384 "uuid": "ca298f42-f42c-44d6-832c-312b8bb705e3", 00:17:40.384 "strip_size_kb": 64, 00:17:40.384 "state": "online", 00:17:40.384 "raid_level": "raid5f", 00:17:40.384 "superblock": true, 00:17:40.384 "num_base_bdevs": 3, 00:17:40.384 "num_base_bdevs_discovered": 3, 00:17:40.384 "num_base_bdevs_operational": 3, 00:17:40.384 "base_bdevs_list": [ 00:17:40.384 { 00:17:40.384 "name": "pt1", 00:17:40.384 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:40.384 "is_configured": true, 00:17:40.384 "data_offset": 2048, 00:17:40.384 "data_size": 63488 00:17:40.384 }, 00:17:40.384 { 00:17:40.384 "name": "pt2", 00:17:40.384 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:40.384 "is_configured": true, 00:17:40.384 "data_offset": 2048, 00:17:40.384 "data_size": 63488 00:17:40.384 }, 00:17:40.384 { 00:17:40.384 "name": "pt3", 00:17:40.384 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:40.384 "is_configured": true, 00:17:40.384 "data_offset": 2048, 00:17:40.384 "data_size": 63488 00:17:40.384 } 00:17:40.384 ] 00:17:40.384 } 00:17:40.384 } 00:17:40.384 }' 00:17:40.384 20:14:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:40.384 20:14:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:40.384 pt2 00:17:40.384 pt3' 00:17:40.384 20:14:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:40.384 20:14:25 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:40.384 20:14:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:40.384 20:14:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:40.384 20:14:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:40.384 20:14:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.384 20:14:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.384 20:14:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.384 20:14:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:40.384 20:14:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:40.384 20:14:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:40.384 20:14:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:40.384 20:14:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:40.384 20:14:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.384 20:14:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.384 20:14:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.384 20:14:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:40.384 20:14:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:40.384 20:14:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:17:40.384 20:14:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:40.384 20:14:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.384 20:14:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:40.384 20:14:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.384 20:14:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.643 20:14:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:40.643 20:14:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:40.643 20:14:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:40.643 20:14:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:40.643 20:14:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.643 20:14:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.643 [2024-10-17 20:14:26.072289] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:40.643 20:14:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.643 20:14:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' ca298f42-f42c-44d6-832c-312b8bb705e3 '!=' ca298f42-f42c-44d6-832c-312b8bb705e3 ']' 00:17:40.643 20:14:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:17:40.643 20:14:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:40.643 20:14:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:40.643 20:14:26 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:40.643 20:14:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.643 20:14:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.643 [2024-10-17 20:14:26.120049] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:40.643 20:14:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.643 20:14:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:40.643 20:14:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:40.643 20:14:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:40.643 20:14:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:40.643 20:14:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:40.643 20:14:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:40.643 20:14:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:40.643 20:14:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:40.643 20:14:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:40.643 20:14:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:40.643 20:14:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.643 20:14:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.643 20:14:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:40.643 20:14:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.643 20:14:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.643 20:14:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:40.643 "name": "raid_bdev1", 00:17:40.643 "uuid": "ca298f42-f42c-44d6-832c-312b8bb705e3", 00:17:40.643 "strip_size_kb": 64, 00:17:40.643 "state": "online", 00:17:40.643 "raid_level": "raid5f", 00:17:40.643 "superblock": true, 00:17:40.643 "num_base_bdevs": 3, 00:17:40.643 "num_base_bdevs_discovered": 2, 00:17:40.643 "num_base_bdevs_operational": 2, 00:17:40.643 "base_bdevs_list": [ 00:17:40.643 { 00:17:40.643 "name": null, 00:17:40.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.643 "is_configured": false, 00:17:40.643 "data_offset": 0, 00:17:40.643 "data_size": 63488 00:17:40.643 }, 00:17:40.643 { 00:17:40.643 "name": "pt2", 00:17:40.643 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:40.643 "is_configured": true, 00:17:40.644 "data_offset": 2048, 00:17:40.644 "data_size": 63488 00:17:40.644 }, 00:17:40.644 { 00:17:40.644 "name": "pt3", 00:17:40.644 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:40.644 "is_configured": true, 00:17:40.644 "data_offset": 2048, 00:17:40.644 "data_size": 63488 00:17:40.644 } 00:17:40.644 ] 00:17:40.644 }' 00:17:40.644 20:14:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:40.644 20:14:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.211 20:14:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:41.211 20:14:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.211 20:14:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.211 [2024-10-17 20:14:26.668228] bdev_raid.c:2411:raid_bdev_delete: 
*DEBUG*: delete raid bdev: raid_bdev1 00:17:41.211 [2024-10-17 20:14:26.668393] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:41.211 [2024-10-17 20:14:26.668511] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:41.211 [2024-10-17 20:14:26.668590] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:41.211 [2024-10-17 20:14:26.668623] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:41.211 20:14:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.211 20:14:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.211 20:14:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.211 20:14:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.211 20:14:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:41.211 20:14:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.211 20:14:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:41.211 20:14:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:41.211 20:14:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:41.211 20:14:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:41.211 20:14:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:41.211 20:14:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.211 20:14:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.211 20:14:26 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.211 20:14:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:41.211 20:14:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:41.211 20:14:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:17:41.211 20:14:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.211 20:14:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.211 20:14:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.211 20:14:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:41.211 20:14:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:41.211 20:14:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:41.211 20:14:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:41.211 20:14:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:41.211 20:14:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.211 20:14:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.211 [2024-10-17 20:14:26.752200] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:41.211 [2024-10-17 20:14:26.752281] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:41.211 [2024-10-17 20:14:26.752307] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:17:41.211 [2024-10-17 20:14:26.752328] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:17:41.211 [2024-10-17 20:14:26.755278] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:41.211 [2024-10-17 20:14:26.755330] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:41.211 [2024-10-17 20:14:26.755420] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:41.211 [2024-10-17 20:14:26.755487] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:41.211 pt2 00:17:41.211 20:14:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.211 20:14:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:17:41.211 20:14:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:41.211 20:14:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:41.211 20:14:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:41.211 20:14:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:41.211 20:14:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:41.211 20:14:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:41.211 20:14:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:41.212 20:14:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:41.212 20:14:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:41.212 20:14:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.212 20:14:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.212 20:14:26 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.212 20:14:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.212 20:14:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.212 20:14:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:41.212 "name": "raid_bdev1", 00:17:41.212 "uuid": "ca298f42-f42c-44d6-832c-312b8bb705e3", 00:17:41.212 "strip_size_kb": 64, 00:17:41.212 "state": "configuring", 00:17:41.212 "raid_level": "raid5f", 00:17:41.212 "superblock": true, 00:17:41.212 "num_base_bdevs": 3, 00:17:41.212 "num_base_bdevs_discovered": 1, 00:17:41.212 "num_base_bdevs_operational": 2, 00:17:41.212 "base_bdevs_list": [ 00:17:41.212 { 00:17:41.212 "name": null, 00:17:41.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.212 "is_configured": false, 00:17:41.212 "data_offset": 2048, 00:17:41.212 "data_size": 63488 00:17:41.212 }, 00:17:41.212 { 00:17:41.212 "name": "pt2", 00:17:41.212 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:41.212 "is_configured": true, 00:17:41.212 "data_offset": 2048, 00:17:41.212 "data_size": 63488 00:17:41.212 }, 00:17:41.212 { 00:17:41.212 "name": null, 00:17:41.212 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:41.212 "is_configured": false, 00:17:41.212 "data_offset": 2048, 00:17:41.212 "data_size": 63488 00:17:41.212 } 00:17:41.212 ] 00:17:41.212 }' 00:17:41.212 20:14:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:41.212 20:14:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.778 20:14:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:17:41.778 20:14:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:41.778 20:14:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # 
i=2 00:17:41.778 20:14:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:41.778 20:14:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.778 20:14:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.778 [2024-10-17 20:14:27.296420] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:41.778 [2024-10-17 20:14:27.297077] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:41.778 [2024-10-17 20:14:27.297131] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:41.778 [2024-10-17 20:14:27.297152] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:41.778 [2024-10-17 20:14:27.297833] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:41.778 [2024-10-17 20:14:27.297862] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:41.778 [2024-10-17 20:14:27.297974] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:41.778 [2024-10-17 20:14:27.298057] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:41.778 [2024-10-17 20:14:27.298689] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:41.778 [2024-10-17 20:14:27.298753] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:41.778 [2024-10-17 20:14:27.299216] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:41.778 [2024-10-17 20:14:27.304567] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:41.778 [2024-10-17 20:14:27.304759] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000008200 00:17:41.778 [2024-10-17 20:14:27.305361] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:41.778 pt3 00:17:41.778 20:14:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.778 20:14:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:41.778 20:14:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:41.778 20:14:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:41.778 20:14:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:41.778 20:14:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:41.778 20:14:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:41.778 20:14:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:41.778 20:14:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:41.778 20:14:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:41.778 20:14:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:41.779 20:14:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.779 20:14:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.779 20:14:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.779 20:14:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.779 20:14:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.779 20:14:27 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:41.779 "name": "raid_bdev1", 00:17:41.779 "uuid": "ca298f42-f42c-44d6-832c-312b8bb705e3", 00:17:41.779 "strip_size_kb": 64, 00:17:41.779 "state": "online", 00:17:41.779 "raid_level": "raid5f", 00:17:41.779 "superblock": true, 00:17:41.779 "num_base_bdevs": 3, 00:17:41.779 "num_base_bdevs_discovered": 2, 00:17:41.779 "num_base_bdevs_operational": 2, 00:17:41.779 "base_bdevs_list": [ 00:17:41.779 { 00:17:41.779 "name": null, 00:17:41.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.779 "is_configured": false, 00:17:41.779 "data_offset": 2048, 00:17:41.779 "data_size": 63488 00:17:41.779 }, 00:17:41.779 { 00:17:41.779 "name": "pt2", 00:17:41.779 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:41.779 "is_configured": true, 00:17:41.779 "data_offset": 2048, 00:17:41.779 "data_size": 63488 00:17:41.779 }, 00:17:41.779 { 00:17:41.779 "name": "pt3", 00:17:41.779 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:41.779 "is_configured": true, 00:17:41.779 "data_offset": 2048, 00:17:41.779 "data_size": 63488 00:17:41.779 } 00:17:41.779 ] 00:17:41.779 }' 00:17:41.779 20:14:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:41.779 20:14:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.348 20:14:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:42.348 20:14:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.348 20:14:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.348 [2024-10-17 20:14:27.851433] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:42.348 [2024-10-17 20:14:27.851619] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:42.348 [2024-10-17 20:14:27.851732] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:17:42.348 [2024-10-17 20:14:27.851815] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:42.348 [2024-10-17 20:14:27.851830] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:42.348 20:14:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.348 20:14:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.348 20:14:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:42.348 20:14:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.348 20:14:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.348 20:14:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.348 20:14:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:42.348 20:14:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:42.348 20:14:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:17:42.348 20:14:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:17:42.348 20:14:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:17:42.348 20:14:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.348 20:14:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.348 20:14:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.348 20:14:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:42.348 20:14:27 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.348 20:14:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.348 [2024-10-17 20:14:27.919451] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:42.348 [2024-10-17 20:14:27.920071] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:42.348 [2024-10-17 20:14:27.920221] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:42.348 [2024-10-17 20:14:27.920329] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:42.348 [2024-10-17 20:14:27.923350] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:42.348 [2024-10-17 20:14:27.923682] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:42.348 [2024-10-17 20:14:27.923804] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:42.349 [2024-10-17 20:14:27.923863] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:42.349 [2024-10-17 20:14:27.924133] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:42.349 [2024-10-17 20:14:27.924192] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:42.349 [2024-10-17 20:14:27.924231] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:42.349 pt1 00:17:42.349 [2024-10-17 20:14:27.924301] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:42.349 20:14:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.349 20:14:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:17:42.349 20:14:27 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:17:42.349 20:14:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:42.349 20:14:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:42.349 20:14:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:42.349 20:14:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:42.349 20:14:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:42.349 20:14:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:42.349 20:14:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:42.349 20:14:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:42.349 20:14:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:42.349 20:14:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.349 20:14:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.349 20:14:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.349 20:14:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.349 20:14:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.349 20:14:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:42.349 "name": "raid_bdev1", 00:17:42.349 "uuid": "ca298f42-f42c-44d6-832c-312b8bb705e3", 00:17:42.349 "strip_size_kb": 64, 00:17:42.349 "state": "configuring", 00:17:42.349 "raid_level": "raid5f", 00:17:42.349 
"superblock": true, 00:17:42.349 "num_base_bdevs": 3, 00:17:42.349 "num_base_bdevs_discovered": 1, 00:17:42.349 "num_base_bdevs_operational": 2, 00:17:42.349 "base_bdevs_list": [ 00:17:42.349 { 00:17:42.349 "name": null, 00:17:42.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.349 "is_configured": false, 00:17:42.349 "data_offset": 2048, 00:17:42.349 "data_size": 63488 00:17:42.349 }, 00:17:42.349 { 00:17:42.349 "name": "pt2", 00:17:42.349 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:42.349 "is_configured": true, 00:17:42.349 "data_offset": 2048, 00:17:42.349 "data_size": 63488 00:17:42.349 }, 00:17:42.349 { 00:17:42.349 "name": null, 00:17:42.349 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:42.349 "is_configured": false, 00:17:42.349 "data_offset": 2048, 00:17:42.349 "data_size": 63488 00:17:42.349 } 00:17:42.349 ] 00:17:42.349 }' 00:17:42.349 20:14:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:42.349 20:14:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.915 20:14:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:17:42.915 20:14:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:42.915 20:14:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.915 20:14:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.915 20:14:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.915 20:14:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:17:42.915 20:14:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:42.915 20:14:28 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.915 20:14:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.915 [2024-10-17 20:14:28.520119] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:42.915 [2024-10-17 20:14:28.520207] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:42.915 [2024-10-17 20:14:28.520240] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:17:42.915 [2024-10-17 20:14:28.520255] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:42.915 [2024-10-17 20:14:28.520821] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:42.915 [2024-10-17 20:14:28.520850] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:42.916 [2024-10-17 20:14:28.520992] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:42.916 [2024-10-17 20:14:28.521055] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:42.916 [2024-10-17 20:14:28.521233] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:42.916 [2024-10-17 20:14:28.521256] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:42.916 [2024-10-17 20:14:28.521597] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:42.916 [2024-10-17 20:14:28.526389] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:42.916 [2024-10-17 20:14:28.526462] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:42.916 [2024-10-17 20:14:28.526731] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:42.916 pt3 00:17:42.916 20:14:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:17:42.916 20:14:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:42.916 20:14:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:42.916 20:14:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:42.916 20:14:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:42.916 20:14:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:42.916 20:14:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:42.916 20:14:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:42.916 20:14:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:42.916 20:14:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:42.916 20:14:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:42.916 20:14:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.916 20:14:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.916 20:14:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.916 20:14:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.916 20:14:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.174 20:14:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:43.174 "name": "raid_bdev1", 00:17:43.174 "uuid": "ca298f42-f42c-44d6-832c-312b8bb705e3", 00:17:43.174 "strip_size_kb": 64, 00:17:43.174 "state": "online", 00:17:43.174 "raid_level": 
"raid5f", 00:17:43.174 "superblock": true, 00:17:43.174 "num_base_bdevs": 3, 00:17:43.174 "num_base_bdevs_discovered": 2, 00:17:43.174 "num_base_bdevs_operational": 2, 00:17:43.174 "base_bdevs_list": [ 00:17:43.174 { 00:17:43.174 "name": null, 00:17:43.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.174 "is_configured": false, 00:17:43.174 "data_offset": 2048, 00:17:43.174 "data_size": 63488 00:17:43.174 }, 00:17:43.174 { 00:17:43.174 "name": "pt2", 00:17:43.174 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:43.174 "is_configured": true, 00:17:43.174 "data_offset": 2048, 00:17:43.174 "data_size": 63488 00:17:43.174 }, 00:17:43.174 { 00:17:43.174 "name": "pt3", 00:17:43.174 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:43.174 "is_configured": true, 00:17:43.174 "data_offset": 2048, 00:17:43.174 "data_size": 63488 00:17:43.174 } 00:17:43.174 ] 00:17:43.174 }' 00:17:43.174 20:14:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:43.174 20:14:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.433 20:14:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:43.433 20:14:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:43.433 20:14:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.433 20:14:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.433 20:14:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.692 20:14:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:43.692 20:14:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:43.692 20:14:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:43.692 20:14:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.692 20:14:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:43.692 [2024-10-17 20:14:29.100726] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:43.692 20:14:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.692 20:14:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' ca298f42-f42c-44d6-832c-312b8bb705e3 '!=' ca298f42-f42c-44d6-832c-312b8bb705e3 ']' 00:17:43.692 20:14:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81386 00:17:43.692 20:14:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 81386 ']' 00:17:43.692 20:14:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 81386 00:17:43.692 20:14:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:17:43.692 20:14:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:43.692 20:14:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81386 00:17:43.692 killing process with pid 81386 00:17:43.692 20:14:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:43.692 20:14:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:43.692 20:14:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81386' 00:17:43.692 20:14:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 81386 00:17:43.692 [2024-10-17 20:14:29.183413] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:43.692 20:14:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 
81386 00:17:43.692 [2024-10-17 20:14:29.183514] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:43.692 [2024-10-17 20:14:29.183586] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:43.692 [2024-10-17 20:14:29.183604] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:43.951 [2024-10-17 20:14:29.420714] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:44.885 20:14:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:17:44.885 00:17:44.885 real 0m8.633s 00:17:44.885 user 0m14.209s 00:17:44.885 sys 0m1.279s 00:17:44.885 20:14:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:44.885 20:14:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.885 ************************************ 00:17:44.885 END TEST raid5f_superblock_test 00:17:44.885 ************************************ 00:17:44.885 20:14:30 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:17:44.885 20:14:30 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:17:44.885 20:14:30 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:17:44.885 20:14:30 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:44.885 20:14:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:44.885 ************************************ 00:17:44.885 START TEST raid5f_rebuild_test 00:17:44.885 ************************************ 00:17:44.885 20:14:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 false false true 00:17:44.885 20:14:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:17:44.885 20:14:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # 
local num_base_bdevs=3 00:17:44.885 20:14:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:17:44.885 20:14:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:44.885 20:14:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:44.885 20:14:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:44.885 20:14:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:44.885 20:14:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:44.885 20:14:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:44.885 20:14:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:44.885 20:14:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:44.885 20:14:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:44.885 20:14:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:44.885 20:14:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:44.885 20:14:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:44.885 20:14:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:44.885 20:14:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:44.885 20:14:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:44.885 20:14:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:44.885 20:14:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:44.885 20:14:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:44.885 20:14:30 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:44.885 20:14:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:44.885 20:14:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:17:44.885 20:14:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:17:44.885 20:14:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:17:44.885 20:14:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:17:44.885 20:14:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:17:44.885 20:14:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81834 00:17:44.885 20:14:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:44.885 20:14:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81834 00:17:44.885 20:14:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 81834 ']' 00:17:44.885 20:14:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:44.885 20:14:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:44.885 20:14:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:44.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:44.885 20:14:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:44.885 20:14:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.885 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:17:44.885 Zero copy mechanism will not be used. 00:17:44.885 [2024-10-17 20:14:30.502873] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 00:17:44.885 [2024-10-17 20:14:30.503058] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81834 ] 00:17:45.143 [2024-10-17 20:14:30.663379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:45.143 [2024-10-17 20:14:30.781532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:45.401 [2024-10-17 20:14:30.957372] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:45.401 [2024-10-17 20:14:30.957424] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:45.968 20:14:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:45.968 20:14:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:17:45.968 20:14:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:45.968 20:14:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:45.968 20:14:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.968 20:14:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.968 BaseBdev1_malloc 00:17:45.968 20:14:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.968 20:14:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:45.968 20:14:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.968 20:14:31 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.968 [2024-10-17 20:14:31.532577] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:45.968 [2024-10-17 20:14:31.532795] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:45.968 [2024-10-17 20:14:31.532839] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:45.968 [2024-10-17 20:14:31.532860] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:45.968 [2024-10-17 20:14:31.535805] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:45.968 [2024-10-17 20:14:31.536065] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:45.968 BaseBdev1 00:17:45.968 20:14:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.968 20:14:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:45.968 20:14:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:45.968 20:14:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.968 20:14:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.968 BaseBdev2_malloc 00:17:45.968 20:14:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.968 20:14:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:45.968 20:14:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.968 20:14:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.968 [2024-10-17 20:14:31.590011] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 
00:17:45.968 [2024-10-17 20:14:31.590132] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:45.968 [2024-10-17 20:14:31.590163] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:45.968 [2024-10-17 20:14:31.590181] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:45.968 [2024-10-17 20:14:31.592924] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:45.968 [2024-10-17 20:14:31.592975] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:45.968 BaseBdev2 00:17:45.968 20:14:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.968 20:14:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:45.968 20:14:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:45.968 20:14:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.968 20:14:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.227 BaseBdev3_malloc 00:17:46.227 20:14:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.227 20:14:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:46.227 20:14:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.227 20:14:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.227 [2024-10-17 20:14:31.658656] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:46.227 [2024-10-17 20:14:31.658756] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:46.227 [2024-10-17 20:14:31.658788] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000008a80 00:17:46.227 [2024-10-17 20:14:31.658807] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:46.227 [2024-10-17 20:14:31.661794] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:46.227 [2024-10-17 20:14:31.662025] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:46.227 BaseBdev3 00:17:46.227 20:14:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.227 20:14:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:46.227 20:14:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.227 20:14:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.227 spare_malloc 00:17:46.227 20:14:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.227 20:14:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:46.227 20:14:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.227 20:14:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.227 spare_delay 00:17:46.227 20:14:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.227 20:14:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:46.227 20:14:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.227 20:14:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.227 [2024-10-17 20:14:31.725805] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:46.227 [2024-10-17 20:14:31.725886] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:46.227 [2024-10-17 20:14:31.725927] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:17:46.227 [2024-10-17 20:14:31.725943] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:46.227 [2024-10-17 20:14:31.729043] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:46.227 [2024-10-17 20:14:31.729264] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:46.227 spare 00:17:46.227 20:14:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.227 20:14:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:17:46.227 20:14:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.227 20:14:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.227 [2024-10-17 20:14:31.737916] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:46.227 [2024-10-17 20:14:31.740590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:46.227 [2024-10-17 20:14:31.740858] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:46.227 [2024-10-17 20:14:31.740984] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:46.227 [2024-10-17 20:14:31.741002] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:17:46.227 [2024-10-17 20:14:31.741389] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:46.227 [2024-10-17 20:14:31.746032] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:46.227 [2024-10-17 20:14:31.746059] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:46.227 [2024-10-17 20:14:31.746313] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:46.227 20:14:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.227 20:14:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:46.227 20:14:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:46.227 20:14:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:46.227 20:14:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:46.227 20:14:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:46.227 20:14:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:46.227 20:14:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:46.227 20:14:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:46.227 20:14:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:46.227 20:14:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:46.227 20:14:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.227 20:14:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.227 20:14:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.227 20:14:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.227 20:14:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.227 20:14:31 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:46.227 "name": "raid_bdev1", 00:17:46.227 "uuid": "1b2a19cb-fbde-4417-840e-e0a2c20f829e", 00:17:46.227 "strip_size_kb": 64, 00:17:46.227 "state": "online", 00:17:46.228 "raid_level": "raid5f", 00:17:46.228 "superblock": false, 00:17:46.228 "num_base_bdevs": 3, 00:17:46.228 "num_base_bdevs_discovered": 3, 00:17:46.228 "num_base_bdevs_operational": 3, 00:17:46.228 "base_bdevs_list": [ 00:17:46.228 { 00:17:46.228 "name": "BaseBdev1", 00:17:46.228 "uuid": "014cf8e6-e283-5dee-8abf-99a3dee8021d", 00:17:46.228 "is_configured": true, 00:17:46.228 "data_offset": 0, 00:17:46.228 "data_size": 65536 00:17:46.228 }, 00:17:46.228 { 00:17:46.228 "name": "BaseBdev2", 00:17:46.228 "uuid": "a48529ef-4887-5477-ab80-5db167ce3b5e", 00:17:46.228 "is_configured": true, 00:17:46.228 "data_offset": 0, 00:17:46.228 "data_size": 65536 00:17:46.228 }, 00:17:46.228 { 00:17:46.228 "name": "BaseBdev3", 00:17:46.228 "uuid": "ac849af3-4f12-5b7c-922c-7dad1a418fe5", 00:17:46.228 "is_configured": true, 00:17:46.228 "data_offset": 0, 00:17:46.228 "data_size": 65536 00:17:46.228 } 00:17:46.228 ] 00:17:46.228 }' 00:17:46.228 20:14:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:46.228 20:14:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.793 20:14:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:46.793 20:14:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:46.793 20:14:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.793 20:14:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.793 [2024-10-17 20:14:32.284336] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:46.793 20:14:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:17:46.793 20:14:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:17:46.794 20:14:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.794 20:14:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:46.794 20:14:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.794 20:14:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.794 20:14:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.794 20:14:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:17:46.794 20:14:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:46.794 20:14:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:46.794 20:14:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:46.794 20:14:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:46.794 20:14:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:46.794 20:14:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:46.794 20:14:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:46.794 20:14:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:46.794 20:14:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:46.794 20:14:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:46.794 20:14:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:46.794 20:14:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # 
(( i < 1 )) 00:17:46.794 20:14:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:47.052 [2024-10-17 20:14:32.672264] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:47.052 /dev/nbd0 00:17:47.311 20:14:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:47.311 20:14:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:47.311 20:14:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:47.311 20:14:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:17:47.311 20:14:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:47.311 20:14:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:47.311 20:14:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:47.311 20:14:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:17:47.311 20:14:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:47.311 20:14:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:47.311 20:14:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:47.311 1+0 records in 00:17:47.311 1+0 records out 00:17:47.311 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000494989 s, 8.3 MB/s 00:17:47.311 20:14:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:47.311 20:14:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:17:47.311 20:14:32 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:47.311 20:14:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:47.311 20:14:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:17:47.311 20:14:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:47.311 20:14:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:47.311 20:14:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:17:47.311 20:14:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:17:47.311 20:14:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:17:47.311 20:14:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:17:47.569 512+0 records in 00:17:47.569 512+0 records out 00:17:47.569 67108864 bytes (67 MB, 64 MiB) copied, 0.394828 s, 170 MB/s 00:17:47.569 20:14:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:47.569 20:14:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:47.570 20:14:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:47.570 20:14:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:47.570 20:14:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:47.570 20:14:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:47.570 20:14:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:47.829 [2024-10-17 20:14:33.439548] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:17:47.829 20:14:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:47.829 20:14:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:47.829 20:14:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:47.829 20:14:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:47.829 20:14:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:47.829 20:14:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:47.829 20:14:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:47.829 20:14:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:47.829 20:14:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:47.829 20:14:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.829 20:14:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.829 [2024-10-17 20:14:33.469718] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:47.829 20:14:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.829 20:14:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:47.829 20:14:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:47.829 20:14:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:47.829 20:14:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:47.829 20:14:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:47.829 20:14:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:17:47.829 20:14:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:47.829 20:14:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:47.829 20:14:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:47.829 20:14:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:47.829 20:14:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.829 20:14:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.829 20:14:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.829 20:14:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.086 20:14:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.086 20:14:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:48.086 "name": "raid_bdev1", 00:17:48.086 "uuid": "1b2a19cb-fbde-4417-840e-e0a2c20f829e", 00:17:48.086 "strip_size_kb": 64, 00:17:48.086 "state": "online", 00:17:48.086 "raid_level": "raid5f", 00:17:48.086 "superblock": false, 00:17:48.086 "num_base_bdevs": 3, 00:17:48.086 "num_base_bdevs_discovered": 2, 00:17:48.086 "num_base_bdevs_operational": 2, 00:17:48.086 "base_bdevs_list": [ 00:17:48.086 { 00:17:48.086 "name": null, 00:17:48.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.086 "is_configured": false, 00:17:48.086 "data_offset": 0, 00:17:48.086 "data_size": 65536 00:17:48.086 }, 00:17:48.086 { 00:17:48.086 "name": "BaseBdev2", 00:17:48.086 "uuid": "a48529ef-4887-5477-ab80-5db167ce3b5e", 00:17:48.086 "is_configured": true, 00:17:48.086 "data_offset": 0, 00:17:48.086 "data_size": 65536 00:17:48.086 }, 00:17:48.086 { 00:17:48.086 "name": "BaseBdev3", 00:17:48.086 "uuid": 
"ac849af3-4f12-5b7c-922c-7dad1a418fe5", 00:17:48.086 "is_configured": true, 00:17:48.086 "data_offset": 0, 00:17:48.086 "data_size": 65536 00:17:48.086 } 00:17:48.086 ] 00:17:48.086 }' 00:17:48.086 20:14:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:48.086 20:14:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.651 20:14:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:48.651 20:14:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.651 20:14:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.651 [2024-10-17 20:14:34.001931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:48.651 [2024-10-17 20:14:34.017724] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:17:48.651 20:14:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.651 20:14:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:48.651 [2024-10-17 20:14:34.025155] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:49.583 20:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:49.583 20:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:49.584 20:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:49.584 20:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:49.584 20:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:49.584 20:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.584 20:14:35 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.584 20:14:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.584 20:14:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.584 20:14:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.584 20:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:49.584 "name": "raid_bdev1", 00:17:49.584 "uuid": "1b2a19cb-fbde-4417-840e-e0a2c20f829e", 00:17:49.584 "strip_size_kb": 64, 00:17:49.584 "state": "online", 00:17:49.584 "raid_level": "raid5f", 00:17:49.584 "superblock": false, 00:17:49.584 "num_base_bdevs": 3, 00:17:49.584 "num_base_bdevs_discovered": 3, 00:17:49.584 "num_base_bdevs_operational": 3, 00:17:49.584 "process": { 00:17:49.584 "type": "rebuild", 00:17:49.584 "target": "spare", 00:17:49.584 "progress": { 00:17:49.584 "blocks": 18432, 00:17:49.584 "percent": 14 00:17:49.584 } 00:17:49.584 }, 00:17:49.584 "base_bdevs_list": [ 00:17:49.584 { 00:17:49.584 "name": "spare", 00:17:49.584 "uuid": "cb1afcf0-ebc6-57a5-9c52-04ba2c3bbbcd", 00:17:49.584 "is_configured": true, 00:17:49.584 "data_offset": 0, 00:17:49.584 "data_size": 65536 00:17:49.584 }, 00:17:49.584 { 00:17:49.584 "name": "BaseBdev2", 00:17:49.584 "uuid": "a48529ef-4887-5477-ab80-5db167ce3b5e", 00:17:49.584 "is_configured": true, 00:17:49.584 "data_offset": 0, 00:17:49.584 "data_size": 65536 00:17:49.584 }, 00:17:49.584 { 00:17:49.584 "name": "BaseBdev3", 00:17:49.584 "uuid": "ac849af3-4f12-5b7c-922c-7dad1a418fe5", 00:17:49.584 "is_configured": true, 00:17:49.584 "data_offset": 0, 00:17:49.584 "data_size": 65536 00:17:49.584 } 00:17:49.584 ] 00:17:49.584 }' 00:17:49.584 20:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:49.584 20:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:49.584 20:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:49.584 20:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:49.584 20:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:49.584 20:14:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.584 20:14:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.584 [2024-10-17 20:14:35.194482] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:49.841 [2024-10-17 20:14:35.240076] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:49.841 [2024-10-17 20:14:35.240159] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:49.841 [2024-10-17 20:14:35.240204] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:49.841 [2024-10-17 20:14:35.240216] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:49.841 20:14:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.841 20:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:49.841 20:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:49.842 20:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:49.842 20:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:49.842 20:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:49.842 20:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:17:49.842 20:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.842 20:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.842 20:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:49.842 20:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.842 20:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.842 20:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.842 20:14:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.842 20:14:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.842 20:14:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.842 20:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.842 "name": "raid_bdev1", 00:17:49.842 "uuid": "1b2a19cb-fbde-4417-840e-e0a2c20f829e", 00:17:49.842 "strip_size_kb": 64, 00:17:49.842 "state": "online", 00:17:49.842 "raid_level": "raid5f", 00:17:49.842 "superblock": false, 00:17:49.842 "num_base_bdevs": 3, 00:17:49.842 "num_base_bdevs_discovered": 2, 00:17:49.842 "num_base_bdevs_operational": 2, 00:17:49.842 "base_bdevs_list": [ 00:17:49.842 { 00:17:49.842 "name": null, 00:17:49.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.842 "is_configured": false, 00:17:49.842 "data_offset": 0, 00:17:49.842 "data_size": 65536 00:17:49.842 }, 00:17:49.842 { 00:17:49.842 "name": "BaseBdev2", 00:17:49.842 "uuid": "a48529ef-4887-5477-ab80-5db167ce3b5e", 00:17:49.842 "is_configured": true, 00:17:49.842 "data_offset": 0, 00:17:49.842 "data_size": 65536 00:17:49.842 }, 00:17:49.842 { 00:17:49.842 "name": "BaseBdev3", 00:17:49.842 "uuid": 
"ac849af3-4f12-5b7c-922c-7dad1a418fe5", 00:17:49.842 "is_configured": true, 00:17:49.842 "data_offset": 0, 00:17:49.842 "data_size": 65536 00:17:49.842 } 00:17:49.842 ] 00:17:49.842 }' 00:17:49.842 20:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.842 20:14:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.407 20:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:50.407 20:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:50.407 20:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:50.407 20:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:50.407 20:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:50.407 20:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.407 20:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.407 20:14:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.407 20:14:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.407 20:14:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.407 20:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:50.407 "name": "raid_bdev1", 00:17:50.407 "uuid": "1b2a19cb-fbde-4417-840e-e0a2c20f829e", 00:17:50.407 "strip_size_kb": 64, 00:17:50.407 "state": "online", 00:17:50.407 "raid_level": "raid5f", 00:17:50.407 "superblock": false, 00:17:50.407 "num_base_bdevs": 3, 00:17:50.407 "num_base_bdevs_discovered": 2, 00:17:50.407 "num_base_bdevs_operational": 2, 00:17:50.407 "base_bdevs_list": [ 00:17:50.407 { 00:17:50.407 
"name": null, 00:17:50.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.407 "is_configured": false, 00:17:50.407 "data_offset": 0, 00:17:50.407 "data_size": 65536 00:17:50.407 }, 00:17:50.407 { 00:17:50.407 "name": "BaseBdev2", 00:17:50.407 "uuid": "a48529ef-4887-5477-ab80-5db167ce3b5e", 00:17:50.407 "is_configured": true, 00:17:50.407 "data_offset": 0, 00:17:50.407 "data_size": 65536 00:17:50.407 }, 00:17:50.407 { 00:17:50.407 "name": "BaseBdev3", 00:17:50.407 "uuid": "ac849af3-4f12-5b7c-922c-7dad1a418fe5", 00:17:50.407 "is_configured": true, 00:17:50.407 "data_offset": 0, 00:17:50.407 "data_size": 65536 00:17:50.407 } 00:17:50.407 ] 00:17:50.407 }' 00:17:50.407 20:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:50.407 20:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:50.407 20:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:50.407 20:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:50.407 20:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:50.407 20:14:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.407 20:14:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.407 [2024-10-17 20:14:35.920873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:50.407 [2024-10-17 20:14:35.934787] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:17:50.407 20:14:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.407 20:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:50.407 [2024-10-17 20:14:35.941801] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild 
on raid bdev raid_bdev1 00:17:51.367 20:14:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:51.367 20:14:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:51.367 20:14:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:51.367 20:14:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:51.367 20:14:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:51.367 20:14:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.367 20:14:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.367 20:14:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.367 20:14:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.368 20:14:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.368 20:14:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:51.368 "name": "raid_bdev1", 00:17:51.368 "uuid": "1b2a19cb-fbde-4417-840e-e0a2c20f829e", 00:17:51.368 "strip_size_kb": 64, 00:17:51.368 "state": "online", 00:17:51.368 "raid_level": "raid5f", 00:17:51.368 "superblock": false, 00:17:51.368 "num_base_bdevs": 3, 00:17:51.368 "num_base_bdevs_discovered": 3, 00:17:51.368 "num_base_bdevs_operational": 3, 00:17:51.368 "process": { 00:17:51.368 "type": "rebuild", 00:17:51.368 "target": "spare", 00:17:51.368 "progress": { 00:17:51.368 "blocks": 18432, 00:17:51.368 "percent": 14 00:17:51.368 } 00:17:51.368 }, 00:17:51.368 "base_bdevs_list": [ 00:17:51.368 { 00:17:51.368 "name": "spare", 00:17:51.368 "uuid": "cb1afcf0-ebc6-57a5-9c52-04ba2c3bbbcd", 00:17:51.368 "is_configured": true, 00:17:51.368 "data_offset": 0, 
00:17:51.368 "data_size": 65536 00:17:51.368 }, 00:17:51.368 { 00:17:51.368 "name": "BaseBdev2", 00:17:51.368 "uuid": "a48529ef-4887-5477-ab80-5db167ce3b5e", 00:17:51.368 "is_configured": true, 00:17:51.368 "data_offset": 0, 00:17:51.368 "data_size": 65536 00:17:51.368 }, 00:17:51.368 { 00:17:51.368 "name": "BaseBdev3", 00:17:51.368 "uuid": "ac849af3-4f12-5b7c-922c-7dad1a418fe5", 00:17:51.368 "is_configured": true, 00:17:51.368 "data_offset": 0, 00:17:51.368 "data_size": 65536 00:17:51.368 } 00:17:51.368 ] 00:17:51.368 }' 00:17:51.368 20:14:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:51.627 20:14:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:51.627 20:14:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:51.627 20:14:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:51.627 20:14:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:17:51.627 20:14:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:17:51.627 20:14:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:51.627 20:14:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=592 00:17:51.627 20:14:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:51.627 20:14:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:51.627 20:14:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:51.627 20:14:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:51.627 20:14:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:51.627 20:14:37 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:51.627 20:14:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.627 20:14:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.627 20:14:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.627 20:14:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.627 20:14:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.627 20:14:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:51.627 "name": "raid_bdev1", 00:17:51.627 "uuid": "1b2a19cb-fbde-4417-840e-e0a2c20f829e", 00:17:51.627 "strip_size_kb": 64, 00:17:51.627 "state": "online", 00:17:51.627 "raid_level": "raid5f", 00:17:51.627 "superblock": false, 00:17:51.627 "num_base_bdevs": 3, 00:17:51.627 "num_base_bdevs_discovered": 3, 00:17:51.627 "num_base_bdevs_operational": 3, 00:17:51.627 "process": { 00:17:51.627 "type": "rebuild", 00:17:51.627 "target": "spare", 00:17:51.627 "progress": { 00:17:51.627 "blocks": 22528, 00:17:51.627 "percent": 17 00:17:51.627 } 00:17:51.627 }, 00:17:51.627 "base_bdevs_list": [ 00:17:51.627 { 00:17:51.627 "name": "spare", 00:17:51.627 "uuid": "cb1afcf0-ebc6-57a5-9c52-04ba2c3bbbcd", 00:17:51.627 "is_configured": true, 00:17:51.627 "data_offset": 0, 00:17:51.627 "data_size": 65536 00:17:51.627 }, 00:17:51.627 { 00:17:51.627 "name": "BaseBdev2", 00:17:51.627 "uuid": "a48529ef-4887-5477-ab80-5db167ce3b5e", 00:17:51.627 "is_configured": true, 00:17:51.627 "data_offset": 0, 00:17:51.627 "data_size": 65536 00:17:51.627 }, 00:17:51.627 { 00:17:51.627 "name": "BaseBdev3", 00:17:51.627 "uuid": "ac849af3-4f12-5b7c-922c-7dad1a418fe5", 00:17:51.627 "is_configured": true, 00:17:51.627 "data_offset": 0, 00:17:51.627 "data_size": 65536 00:17:51.627 } 
00:17:51.627 ] 00:17:51.627 }' 00:17:51.627 20:14:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:51.627 20:14:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:51.627 20:14:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:51.627 20:14:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:51.627 20:14:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:52.999 20:14:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:52.999 20:14:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:52.999 20:14:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:52.999 20:14:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:52.999 20:14:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:52.999 20:14:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:52.999 20:14:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.999 20:14:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.999 20:14:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.999 20:14:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.999 20:14:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.999 20:14:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:52.999 "name": "raid_bdev1", 00:17:52.999 "uuid": "1b2a19cb-fbde-4417-840e-e0a2c20f829e", 00:17:52.999 
"strip_size_kb": 64, 00:17:52.999 "state": "online", 00:17:52.999 "raid_level": "raid5f", 00:17:52.999 "superblock": false, 00:17:52.999 "num_base_bdevs": 3, 00:17:52.999 "num_base_bdevs_discovered": 3, 00:17:52.999 "num_base_bdevs_operational": 3, 00:17:52.999 "process": { 00:17:52.999 "type": "rebuild", 00:17:52.999 "target": "spare", 00:17:52.999 "progress": { 00:17:52.999 "blocks": 45056, 00:17:52.999 "percent": 34 00:17:52.999 } 00:17:52.999 }, 00:17:52.999 "base_bdevs_list": [ 00:17:52.999 { 00:17:53.000 "name": "spare", 00:17:53.000 "uuid": "cb1afcf0-ebc6-57a5-9c52-04ba2c3bbbcd", 00:17:53.000 "is_configured": true, 00:17:53.000 "data_offset": 0, 00:17:53.000 "data_size": 65536 00:17:53.000 }, 00:17:53.000 { 00:17:53.000 "name": "BaseBdev2", 00:17:53.000 "uuid": "a48529ef-4887-5477-ab80-5db167ce3b5e", 00:17:53.000 "is_configured": true, 00:17:53.000 "data_offset": 0, 00:17:53.000 "data_size": 65536 00:17:53.000 }, 00:17:53.000 { 00:17:53.000 "name": "BaseBdev3", 00:17:53.000 "uuid": "ac849af3-4f12-5b7c-922c-7dad1a418fe5", 00:17:53.000 "is_configured": true, 00:17:53.000 "data_offset": 0, 00:17:53.000 "data_size": 65536 00:17:53.000 } 00:17:53.000 ] 00:17:53.000 }' 00:17:53.000 20:14:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:53.000 20:14:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:53.000 20:14:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:53.000 20:14:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:53.000 20:14:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:53.935 20:14:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:53.935 20:14:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:53.935 20:14:39 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:53.935 20:14:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:53.935 20:14:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:53.935 20:14:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:53.935 20:14:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.935 20:14:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.935 20:14:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.935 20:14:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.935 20:14:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.935 20:14:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:53.935 "name": "raid_bdev1", 00:17:53.935 "uuid": "1b2a19cb-fbde-4417-840e-e0a2c20f829e", 00:17:53.935 "strip_size_kb": 64, 00:17:53.935 "state": "online", 00:17:53.935 "raid_level": "raid5f", 00:17:53.935 "superblock": false, 00:17:53.935 "num_base_bdevs": 3, 00:17:53.935 "num_base_bdevs_discovered": 3, 00:17:53.935 "num_base_bdevs_operational": 3, 00:17:53.935 "process": { 00:17:53.935 "type": "rebuild", 00:17:53.935 "target": "spare", 00:17:53.935 "progress": { 00:17:53.935 "blocks": 69632, 00:17:53.935 "percent": 53 00:17:53.935 } 00:17:53.935 }, 00:17:53.935 "base_bdevs_list": [ 00:17:53.935 { 00:17:53.935 "name": "spare", 00:17:53.935 "uuid": "cb1afcf0-ebc6-57a5-9c52-04ba2c3bbbcd", 00:17:53.935 "is_configured": true, 00:17:53.935 "data_offset": 0, 00:17:53.935 "data_size": 65536 00:17:53.935 }, 00:17:53.935 { 00:17:53.935 "name": "BaseBdev2", 00:17:53.935 "uuid": "a48529ef-4887-5477-ab80-5db167ce3b5e", 00:17:53.935 
"is_configured": true, 00:17:53.935 "data_offset": 0, 00:17:53.935 "data_size": 65536 00:17:53.935 }, 00:17:53.935 { 00:17:53.935 "name": "BaseBdev3", 00:17:53.935 "uuid": "ac849af3-4f12-5b7c-922c-7dad1a418fe5", 00:17:53.935 "is_configured": true, 00:17:53.935 "data_offset": 0, 00:17:53.935 "data_size": 65536 00:17:53.935 } 00:17:53.935 ] 00:17:53.935 }' 00:17:53.935 20:14:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:53.935 20:14:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:53.936 20:14:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:53.936 20:14:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:53.936 20:14:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:55.350 20:14:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:55.350 20:14:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:55.350 20:14:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:55.350 20:14:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:55.350 20:14:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:55.350 20:14:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:55.350 20:14:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.350 20:14:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.350 20:14:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.350 20:14:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:17:55.350 20:14:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.350 20:14:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:55.350 "name": "raid_bdev1", 00:17:55.350 "uuid": "1b2a19cb-fbde-4417-840e-e0a2c20f829e", 00:17:55.350 "strip_size_kb": 64, 00:17:55.350 "state": "online", 00:17:55.350 "raid_level": "raid5f", 00:17:55.350 "superblock": false, 00:17:55.350 "num_base_bdevs": 3, 00:17:55.350 "num_base_bdevs_discovered": 3, 00:17:55.350 "num_base_bdevs_operational": 3, 00:17:55.350 "process": { 00:17:55.350 "type": "rebuild", 00:17:55.350 "target": "spare", 00:17:55.350 "progress": { 00:17:55.350 "blocks": 92160, 00:17:55.350 "percent": 70 00:17:55.350 } 00:17:55.350 }, 00:17:55.350 "base_bdevs_list": [ 00:17:55.350 { 00:17:55.350 "name": "spare", 00:17:55.350 "uuid": "cb1afcf0-ebc6-57a5-9c52-04ba2c3bbbcd", 00:17:55.350 "is_configured": true, 00:17:55.350 "data_offset": 0, 00:17:55.350 "data_size": 65536 00:17:55.350 }, 00:17:55.350 { 00:17:55.350 "name": "BaseBdev2", 00:17:55.350 "uuid": "a48529ef-4887-5477-ab80-5db167ce3b5e", 00:17:55.350 "is_configured": true, 00:17:55.350 "data_offset": 0, 00:17:55.350 "data_size": 65536 00:17:55.350 }, 00:17:55.350 { 00:17:55.350 "name": "BaseBdev3", 00:17:55.350 "uuid": "ac849af3-4f12-5b7c-922c-7dad1a418fe5", 00:17:55.350 "is_configured": true, 00:17:55.350 "data_offset": 0, 00:17:55.350 "data_size": 65536 00:17:55.350 } 00:17:55.350 ] 00:17:55.350 }' 00:17:55.350 20:14:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:55.350 20:14:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:55.350 20:14:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:55.350 20:14:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:55.350 20:14:40 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:56.286 20:14:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:56.286 20:14:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:56.286 20:14:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:56.286 20:14:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:56.286 20:14:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:56.286 20:14:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:56.286 20:14:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.286 20:14:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.286 20:14:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.286 20:14:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.286 20:14:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.286 20:14:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:56.286 "name": "raid_bdev1", 00:17:56.286 "uuid": "1b2a19cb-fbde-4417-840e-e0a2c20f829e", 00:17:56.286 "strip_size_kb": 64, 00:17:56.286 "state": "online", 00:17:56.286 "raid_level": "raid5f", 00:17:56.286 "superblock": false, 00:17:56.286 "num_base_bdevs": 3, 00:17:56.286 "num_base_bdevs_discovered": 3, 00:17:56.286 "num_base_bdevs_operational": 3, 00:17:56.286 "process": { 00:17:56.286 "type": "rebuild", 00:17:56.286 "target": "spare", 00:17:56.286 "progress": { 00:17:56.286 "blocks": 116736, 00:17:56.286 "percent": 89 00:17:56.286 } 00:17:56.286 }, 00:17:56.286 "base_bdevs_list": [ 00:17:56.286 { 
00:17:56.286 "name": "spare", 00:17:56.286 "uuid": "cb1afcf0-ebc6-57a5-9c52-04ba2c3bbbcd", 00:17:56.286 "is_configured": true, 00:17:56.286 "data_offset": 0, 00:17:56.286 "data_size": 65536 00:17:56.286 }, 00:17:56.286 { 00:17:56.286 "name": "BaseBdev2", 00:17:56.286 "uuid": "a48529ef-4887-5477-ab80-5db167ce3b5e", 00:17:56.286 "is_configured": true, 00:17:56.286 "data_offset": 0, 00:17:56.286 "data_size": 65536 00:17:56.286 }, 00:17:56.286 { 00:17:56.286 "name": "BaseBdev3", 00:17:56.286 "uuid": "ac849af3-4f12-5b7c-922c-7dad1a418fe5", 00:17:56.286 "is_configured": true, 00:17:56.286 "data_offset": 0, 00:17:56.286 "data_size": 65536 00:17:56.286 } 00:17:56.286 ] 00:17:56.286 }' 00:17:56.286 20:14:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:56.286 20:14:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:56.286 20:14:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:56.286 20:14:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:56.286 20:14:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:56.853 [2024-10-17 20:14:42.417135] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:56.853 [2024-10-17 20:14:42.417261] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:56.853 [2024-10-17 20:14:42.417340] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:57.420 20:14:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:57.420 20:14:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:57.420 20:14:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:57.420 20:14:42 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:57.420 20:14:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:57.420 20:14:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:57.420 20:14:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.420 20:14:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.420 20:14:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.420 20:14:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.420 20:14:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.420 20:14:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:57.420 "name": "raid_bdev1", 00:17:57.420 "uuid": "1b2a19cb-fbde-4417-840e-e0a2c20f829e", 00:17:57.420 "strip_size_kb": 64, 00:17:57.420 "state": "online", 00:17:57.420 "raid_level": "raid5f", 00:17:57.420 "superblock": false, 00:17:57.420 "num_base_bdevs": 3, 00:17:57.420 "num_base_bdevs_discovered": 3, 00:17:57.420 "num_base_bdevs_operational": 3, 00:17:57.420 "base_bdevs_list": [ 00:17:57.420 { 00:17:57.420 "name": "spare", 00:17:57.420 "uuid": "cb1afcf0-ebc6-57a5-9c52-04ba2c3bbbcd", 00:17:57.420 "is_configured": true, 00:17:57.420 "data_offset": 0, 00:17:57.420 "data_size": 65536 00:17:57.420 }, 00:17:57.420 { 00:17:57.420 "name": "BaseBdev2", 00:17:57.420 "uuid": "a48529ef-4887-5477-ab80-5db167ce3b5e", 00:17:57.420 "is_configured": true, 00:17:57.420 "data_offset": 0, 00:17:57.420 "data_size": 65536 00:17:57.420 }, 00:17:57.420 { 00:17:57.420 "name": "BaseBdev3", 00:17:57.420 "uuid": "ac849af3-4f12-5b7c-922c-7dad1a418fe5", 00:17:57.420 "is_configured": true, 00:17:57.420 "data_offset": 0, 00:17:57.420 "data_size": 65536 00:17:57.420 } 
00:17:57.420 ] 00:17:57.420 }' 00:17:57.420 20:14:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:57.420 20:14:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:57.420 20:14:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:57.420 20:14:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:57.420 20:14:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:17:57.420 20:14:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:57.420 20:14:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:57.420 20:14:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:57.420 20:14:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:57.420 20:14:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:57.420 20:14:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.420 20:14:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.420 20:14:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.420 20:14:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.679 20:14:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.679 20:14:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:57.679 "name": "raid_bdev1", 00:17:57.679 "uuid": "1b2a19cb-fbde-4417-840e-e0a2c20f829e", 00:17:57.679 "strip_size_kb": 64, 00:17:57.679 "state": "online", 00:17:57.679 "raid_level": "raid5f", 00:17:57.679 "superblock": false, 
00:17:57.679 "num_base_bdevs": 3, 00:17:57.679 "num_base_bdevs_discovered": 3, 00:17:57.679 "num_base_bdevs_operational": 3, 00:17:57.679 "base_bdevs_list": [ 00:17:57.679 { 00:17:57.679 "name": "spare", 00:17:57.679 "uuid": "cb1afcf0-ebc6-57a5-9c52-04ba2c3bbbcd", 00:17:57.679 "is_configured": true, 00:17:57.679 "data_offset": 0, 00:17:57.679 "data_size": 65536 00:17:57.679 }, 00:17:57.679 { 00:17:57.679 "name": "BaseBdev2", 00:17:57.679 "uuid": "a48529ef-4887-5477-ab80-5db167ce3b5e", 00:17:57.679 "is_configured": true, 00:17:57.679 "data_offset": 0, 00:17:57.679 "data_size": 65536 00:17:57.679 }, 00:17:57.679 { 00:17:57.679 "name": "BaseBdev3", 00:17:57.679 "uuid": "ac849af3-4f12-5b7c-922c-7dad1a418fe5", 00:17:57.679 "is_configured": true, 00:17:57.679 "data_offset": 0, 00:17:57.679 "data_size": 65536 00:17:57.679 } 00:17:57.679 ] 00:17:57.679 }' 00:17:57.679 20:14:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:57.679 20:14:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:57.679 20:14:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:57.679 20:14:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:57.679 20:14:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:57.679 20:14:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:57.679 20:14:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:57.679 20:14:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:57.679 20:14:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:57.679 20:14:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:57.679 
20:14:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:57.679 20:14:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:57.679 20:14:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:57.679 20:14:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:57.679 20:14:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.679 20:14:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.679 20:14:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.679 20:14:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.679 20:14:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.679 20:14:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:57.679 "name": "raid_bdev1", 00:17:57.679 "uuid": "1b2a19cb-fbde-4417-840e-e0a2c20f829e", 00:17:57.679 "strip_size_kb": 64, 00:17:57.679 "state": "online", 00:17:57.679 "raid_level": "raid5f", 00:17:57.679 "superblock": false, 00:17:57.679 "num_base_bdevs": 3, 00:17:57.679 "num_base_bdevs_discovered": 3, 00:17:57.679 "num_base_bdevs_operational": 3, 00:17:57.679 "base_bdevs_list": [ 00:17:57.679 { 00:17:57.679 "name": "spare", 00:17:57.679 "uuid": "cb1afcf0-ebc6-57a5-9c52-04ba2c3bbbcd", 00:17:57.679 "is_configured": true, 00:17:57.679 "data_offset": 0, 00:17:57.679 "data_size": 65536 00:17:57.679 }, 00:17:57.679 { 00:17:57.679 "name": "BaseBdev2", 00:17:57.679 "uuid": "a48529ef-4887-5477-ab80-5db167ce3b5e", 00:17:57.679 "is_configured": true, 00:17:57.679 "data_offset": 0, 00:17:57.679 "data_size": 65536 00:17:57.679 }, 00:17:57.679 { 00:17:57.679 "name": "BaseBdev3", 00:17:57.679 "uuid": "ac849af3-4f12-5b7c-922c-7dad1a418fe5", 
00:17:57.679 "is_configured": true, 00:17:57.679 "data_offset": 0, 00:17:57.679 "data_size": 65536 00:17:57.679 } 00:17:57.679 ] 00:17:57.679 }' 00:17:57.679 20:14:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:57.679 20:14:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.270 20:14:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:58.270 20:14:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.270 20:14:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.270 [2024-10-17 20:14:43.778176] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:58.270 [2024-10-17 20:14:43.778214] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:58.270 [2024-10-17 20:14:43.778318] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:58.270 [2024-10-17 20:14:43.778442] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:58.270 [2024-10-17 20:14:43.778475] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:58.270 20:14:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.270 20:14:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.270 20:14:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:17:58.270 20:14:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.270 20:14:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.270 20:14:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.270 20:14:43 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:58.270 20:14:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:58.270 20:14:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:58.270 20:14:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:58.270 20:14:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:58.270 20:14:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:58.270 20:14:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:58.270 20:14:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:58.270 20:14:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:58.270 20:14:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:58.270 20:14:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:58.270 20:14:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:58.270 20:14:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:58.565 /dev/nbd0 00:17:58.565 20:14:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:58.565 20:14:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:58.565 20:14:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:58.565 20:14:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:17:58.565 20:14:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:58.565 20:14:44 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:58.565 20:14:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:58.565 20:14:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:17:58.565 20:14:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:58.565 20:14:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:58.565 20:14:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:58.565 1+0 records in 00:17:58.565 1+0 records out 00:17:58.565 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000395135 s, 10.4 MB/s 00:17:58.565 20:14:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:58.565 20:14:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:17:58.565 20:14:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:58.565 20:14:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:58.565 20:14:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:17:58.565 20:14:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:58.565 20:14:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:58.565 20:14:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:59.131 /dev/nbd1 00:17:59.131 20:14:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:59.131 20:14:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:59.131 20:14:44 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:17:59.131 20:14:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:17:59.131 20:14:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:59.131 20:14:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:59.131 20:14:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:17:59.131 20:14:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:17:59.131 20:14:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:59.131 20:14:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:59.131 20:14:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:59.131 1+0 records in 00:17:59.131 1+0 records out 00:17:59.131 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000403898 s, 10.1 MB/s 00:17:59.131 20:14:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:59.131 20:14:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:17:59.131 20:14:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:59.131 20:14:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:59.131 20:14:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:17:59.131 20:14:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:59.131 20:14:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:59.131 20:14:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- 
# cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:59.131 20:14:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:59.131 20:14:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:59.131 20:14:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:59.131 20:14:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:59.131 20:14:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:59.131 20:14:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:59.131 20:14:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:59.390 20:14:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:59.390 20:14:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:59.390 20:14:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:59.390 20:14:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:59.390 20:14:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:59.390 20:14:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:59.390 20:14:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:59.390 20:14:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:59.390 20:14:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:59.390 20:14:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:59.648 20:14:45 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:59.648 20:14:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:59.648 20:14:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:59.648 20:14:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:59.648 20:14:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:59.648 20:14:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:59.648 20:14:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:59.648 20:14:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:59.648 20:14:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:17:59.648 20:14:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81834 00:17:59.648 20:14:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 81834 ']' 00:17:59.648 20:14:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 81834 00:17:59.648 20:14:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:17:59.648 20:14:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:59.648 20:14:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81834 00:17:59.648 killing process with pid 81834 00:17:59.648 Received shutdown signal, test time was about 60.000000 seconds 00:17:59.648 00:17:59.648 Latency(us) 00:17:59.648 [2024-10-17T20:14:45.302Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:59.648 [2024-10-17T20:14:45.302Z] =================================================================================================================== 00:17:59.648 [2024-10-17T20:14:45.302Z] Total : 0.00 0.00 0.00 0.00 0.00 
18446744073709551616.00 0.00 00:17:59.648 20:14:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:59.648 20:14:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:59.648 20:14:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81834' 00:17:59.648 20:14:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 81834 00:17:59.648 [2024-10-17 20:14:45.290310] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:59.648 20:14:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 81834 00:18:00.214 [2024-10-17 20:14:45.588297] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:01.148 20:14:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:18:01.149 00:18:01.149 real 0m16.094s 00:18:01.149 user 0m20.662s 00:18:01.149 sys 0m1.935s 00:18:01.149 20:14:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:01.149 20:14:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.149 ************************************ 00:18:01.149 END TEST raid5f_rebuild_test 00:18:01.149 ************************************ 00:18:01.149 20:14:46 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:18:01.149 20:14:46 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:18:01.149 20:14:46 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:01.149 20:14:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:01.149 ************************************ 00:18:01.149 START TEST raid5f_rebuild_test_sb 00:18:01.149 ************************************ 00:18:01.149 20:14:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 true false true 
00:18:01.149 20:14:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:18:01.149 20:14:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:18:01.149 20:14:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:01.149 20:14:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:01.149 20:14:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:01.149 20:14:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:01.149 20:14:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:01.149 20:14:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:01.149 20:14:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:01.149 20:14:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:01.149 20:14:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:01.149 20:14:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:01.149 20:14:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:01.149 20:14:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:18:01.149 20:14:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:01.149 20:14:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:01.149 20:14:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:01.149 20:14:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:01.149 20:14:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local 
raid_bdev_name=raid_bdev1 00:18:01.149 20:14:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:01.149 20:14:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:01.149 20:14:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:01.149 20:14:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:01.149 20:14:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:18:01.149 20:14:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:18:01.149 20:14:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:18:01.149 20:14:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:18:01.149 20:14:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:01.149 20:14:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:01.149 20:14:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82290 00:18:01.149 20:14:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82290 00:18:01.149 20:14:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:01.149 20:14:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 82290 ']' 00:18:01.149 20:14:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:01.149 20:14:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:01.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:01.149 20:14:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:01.149 20:14:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:01.149 20:14:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.149 [2024-10-17 20:14:46.683855] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 00:18:01.149 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:01.149 Zero copy mechanism will not be used. 00:18:01.149 [2024-10-17 20:14:46.684098] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82290 ] 00:18:01.408 [2024-10-17 20:14:46.860209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.408 [2024-10-17 20:14:46.976686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:01.666 [2024-10-17 20:14:47.166636] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:01.666 [2024-10-17 20:14:47.166708] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:02.234 20:14:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:02.234 20:14:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:18:02.234 20:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:02.234 20:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:02.234 20:14:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.234 20:14:47 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.234 BaseBdev1_malloc 00:18:02.234 20:14:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.234 20:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:02.234 20:14:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.234 20:14:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.234 [2024-10-17 20:14:47.676731] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:02.234 [2024-10-17 20:14:47.676836] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:02.234 [2024-10-17 20:14:47.676870] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:02.234 [2024-10-17 20:14:47.676889] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:02.234 [2024-10-17 20:14:47.679507] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:02.234 [2024-10-17 20:14:47.679583] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:02.234 BaseBdev1 00:18:02.234 20:14:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.234 20:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:02.234 20:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:02.234 20:14:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.234 20:14:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.234 BaseBdev2_malloc 00:18:02.234 20:14:47 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.234 20:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:02.234 20:14:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.234 20:14:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.234 [2024-10-17 20:14:47.723426] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:02.234 [2024-10-17 20:14:47.723520] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:02.234 [2024-10-17 20:14:47.723547] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:02.234 [2024-10-17 20:14:47.723563] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:02.234 [2024-10-17 20:14:47.726433] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:02.234 [2024-10-17 20:14:47.726508] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:02.234 BaseBdev2 00:18:02.234 20:14:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.234 20:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:02.234 20:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:02.234 20:14:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.234 20:14:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.234 BaseBdev3_malloc 00:18:02.234 20:14:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.234 20:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev3_malloc -p BaseBdev3 00:18:02.234 20:14:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.234 20:14:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.234 [2024-10-17 20:14:47.779162] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:18:02.234 [2024-10-17 20:14:47.779239] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:02.234 [2024-10-17 20:14:47.779268] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:02.234 [2024-10-17 20:14:47.779286] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:02.234 [2024-10-17 20:14:47.781811] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:02.234 [2024-10-17 20:14:47.781889] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:02.234 BaseBdev3 00:18:02.234 20:14:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.234 20:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:02.234 20:14:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.234 20:14:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.234 spare_malloc 00:18:02.234 20:14:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.234 20:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:02.234 20:14:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.234 20:14:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.234 spare_delay 00:18:02.234 
20:14:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.234 20:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:02.234 20:14:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.234 20:14:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.234 [2024-10-17 20:14:47.842426] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:02.234 [2024-10-17 20:14:47.842503] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:02.234 [2024-10-17 20:14:47.842557] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:18:02.234 [2024-10-17 20:14:47.842574] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:02.234 [2024-10-17 20:14:47.845814] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:02.234 [2024-10-17 20:14:47.845895] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:02.234 spare 00:18:02.234 20:14:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.234 20:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:18:02.234 20:14:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.234 20:14:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.234 [2024-10-17 20:14:47.850734] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:02.234 [2024-10-17 20:14:47.853633] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:02.234 [2024-10-17 20:14:47.853787] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:02.234 [2024-10-17 20:14:47.854145] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:02.234 [2024-10-17 20:14:47.854178] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:02.234 [2024-10-17 20:14:47.854518] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:02.234 [2024-10-17 20:14:47.860737] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:02.234 [2024-10-17 20:14:47.860802] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:02.234 [2024-10-17 20:14:47.861105] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:02.234 20:14:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.234 20:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:02.234 20:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:02.234 20:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:02.234 20:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:02.234 20:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:02.235 20:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:02.235 20:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.235 20:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.235 20:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:18:02.235 20:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.235 20:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.235 20:14:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.235 20:14:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.235 20:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.235 20:14:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.495 20:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.495 "name": "raid_bdev1", 00:18:02.495 "uuid": "e0b86f7b-b156-46a4-814e-196ecd7ee788", 00:18:02.495 "strip_size_kb": 64, 00:18:02.495 "state": "online", 00:18:02.495 "raid_level": "raid5f", 00:18:02.495 "superblock": true, 00:18:02.495 "num_base_bdevs": 3, 00:18:02.495 "num_base_bdevs_discovered": 3, 00:18:02.495 "num_base_bdevs_operational": 3, 00:18:02.495 "base_bdevs_list": [ 00:18:02.495 { 00:18:02.495 "name": "BaseBdev1", 00:18:02.495 "uuid": "753d8cfa-06dd-5b39-9658-10010a7a8db0", 00:18:02.495 "is_configured": true, 00:18:02.495 "data_offset": 2048, 00:18:02.495 "data_size": 63488 00:18:02.495 }, 00:18:02.495 { 00:18:02.495 "name": "BaseBdev2", 00:18:02.495 "uuid": "8572f1ef-188a-592e-9be7-6c2f34ce0bae", 00:18:02.495 "is_configured": true, 00:18:02.495 "data_offset": 2048, 00:18:02.495 "data_size": 63488 00:18:02.495 }, 00:18:02.495 { 00:18:02.495 "name": "BaseBdev3", 00:18:02.495 "uuid": "5668721d-e418-5ed9-aedf-af814f81a5a5", 00:18:02.495 "is_configured": true, 00:18:02.495 "data_offset": 2048, 00:18:02.495 "data_size": 63488 00:18:02.495 } 00:18:02.495 ] 00:18:02.495 }' 00:18:02.495 20:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.495 20:14:47 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.063 20:14:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:03.063 20:14:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:03.063 20:14:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.063 20:14:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.063 [2024-10-17 20:14:48.455589] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:03.063 20:14:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.063 20:14:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:18:03.063 20:14:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.063 20:14:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.063 20:14:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.063 20:14:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:03.063 20:14:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.063 20:14:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:18:03.063 20:14:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:03.063 20:14:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:03.063 20:14:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:03.063 20:14:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:03.063 20:14:48 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:03.063 20:14:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:03.063 20:14:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:03.063 20:14:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:03.063 20:14:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:03.063 20:14:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:18:03.063 20:14:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:03.063 20:14:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:03.063 20:14:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:03.322 [2024-10-17 20:14:48.859780] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:03.322 /dev/nbd0 00:18:03.322 20:14:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:03.322 20:14:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:03.322 20:14:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:18:03.323 20:14:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:18:03.323 20:14:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:03.323 20:14:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:03.323 20:14:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:18:03.323 20:14:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 
00:18:03.323 20:14:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:03.323 20:14:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:03.323 20:14:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:03.323 1+0 records in 00:18:03.323 1+0 records out 00:18:03.323 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000335001 s, 12.2 MB/s 00:18:03.323 20:14:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:03.323 20:14:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:18:03.323 20:14:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:03.323 20:14:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:03.323 20:14:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:18:03.323 20:14:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:03.323 20:14:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:03.323 20:14:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:18:03.323 20:14:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:18:03.323 20:14:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:18:03.323 20:14:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:18:03.890 496+0 records in 00:18:03.890 496+0 records out 00:18:03.890 65011712 bytes (65 MB, 62 MiB) copied, 0.459497 s, 141 MB/s 00:18:03.890 20:14:49 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:03.890 20:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:03.890 20:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:03.890 20:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:03.890 20:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:18:03.890 20:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:03.890 20:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:04.149 [2024-10-17 20:14:49.730497] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:04.149 20:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:04.149 20:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:04.149 20:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:04.149 20:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:04.149 20:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:04.149 20:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:04.149 20:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:04.149 20:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:04.149 20:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:04.149 20:14:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.149 20:14:49 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:18:04.149 [2024-10-17 20:14:49.744914] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:04.149 20:14:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.149 20:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:04.149 20:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:04.149 20:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:04.149 20:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:04.149 20:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:04.149 20:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:04.149 20:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:04.149 20:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:04.149 20:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:04.149 20:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:04.149 20:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.149 20:14:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.149 20:14:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.149 20:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.149 20:14:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.407 20:14:49 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:04.407 "name": "raid_bdev1", 00:18:04.407 "uuid": "e0b86f7b-b156-46a4-814e-196ecd7ee788", 00:18:04.407 "strip_size_kb": 64, 00:18:04.407 "state": "online", 00:18:04.407 "raid_level": "raid5f", 00:18:04.407 "superblock": true, 00:18:04.407 "num_base_bdevs": 3, 00:18:04.407 "num_base_bdevs_discovered": 2, 00:18:04.407 "num_base_bdevs_operational": 2, 00:18:04.407 "base_bdevs_list": [ 00:18:04.407 { 00:18:04.407 "name": null, 00:18:04.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.407 "is_configured": false, 00:18:04.407 "data_offset": 0, 00:18:04.407 "data_size": 63488 00:18:04.407 }, 00:18:04.407 { 00:18:04.407 "name": "BaseBdev2", 00:18:04.407 "uuid": "8572f1ef-188a-592e-9be7-6c2f34ce0bae", 00:18:04.407 "is_configured": true, 00:18:04.407 "data_offset": 2048, 00:18:04.407 "data_size": 63488 00:18:04.407 }, 00:18:04.407 { 00:18:04.407 "name": "BaseBdev3", 00:18:04.407 "uuid": "5668721d-e418-5ed9-aedf-af814f81a5a5", 00:18:04.407 "is_configured": true, 00:18:04.407 "data_offset": 2048, 00:18:04.407 "data_size": 63488 00:18:04.407 } 00:18:04.407 ] 00:18:04.407 }' 00:18:04.407 20:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:04.407 20:14:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.666 20:14:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:04.666 20:14:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.666 20:14:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.666 [2024-10-17 20:14:50.289102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:04.666 [2024-10-17 20:14:50.304169] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:18:04.666 20:14:50 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.666 20:14:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:04.666 [2024-10-17 20:14:50.311971] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:06.044 20:14:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:06.044 20:14:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:06.044 20:14:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:06.044 20:14:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:06.044 20:14:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:06.044 20:14:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.044 20:14:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.044 20:14:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.044 20:14:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.044 20:14:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.044 20:14:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:06.044 "name": "raid_bdev1", 00:18:06.044 "uuid": "e0b86f7b-b156-46a4-814e-196ecd7ee788", 00:18:06.044 "strip_size_kb": 64, 00:18:06.044 "state": "online", 00:18:06.044 "raid_level": "raid5f", 00:18:06.044 "superblock": true, 00:18:06.044 "num_base_bdevs": 3, 00:18:06.044 "num_base_bdevs_discovered": 3, 00:18:06.044 "num_base_bdevs_operational": 3, 00:18:06.044 "process": { 00:18:06.044 "type": "rebuild", 00:18:06.044 "target": "spare", 00:18:06.044 "progress": { 
00:18:06.045 "blocks": 18432, 00:18:06.045 "percent": 14 00:18:06.045 } 00:18:06.045 }, 00:18:06.045 "base_bdevs_list": [ 00:18:06.045 { 00:18:06.045 "name": "spare", 00:18:06.045 "uuid": "13e40678-b775-54c3-877a-20d9ef98aa74", 00:18:06.045 "is_configured": true, 00:18:06.045 "data_offset": 2048, 00:18:06.045 "data_size": 63488 00:18:06.045 }, 00:18:06.045 { 00:18:06.045 "name": "BaseBdev2", 00:18:06.045 "uuid": "8572f1ef-188a-592e-9be7-6c2f34ce0bae", 00:18:06.045 "is_configured": true, 00:18:06.045 "data_offset": 2048, 00:18:06.045 "data_size": 63488 00:18:06.045 }, 00:18:06.045 { 00:18:06.045 "name": "BaseBdev3", 00:18:06.045 "uuid": "5668721d-e418-5ed9-aedf-af814f81a5a5", 00:18:06.045 "is_configured": true, 00:18:06.045 "data_offset": 2048, 00:18:06.045 "data_size": 63488 00:18:06.045 } 00:18:06.045 ] 00:18:06.045 }' 00:18:06.045 20:14:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:06.045 20:14:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:06.045 20:14:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:06.045 20:14:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:06.045 20:14:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:06.045 20:14:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.045 20:14:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.045 [2024-10-17 20:14:51.474292] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:06.045 [2024-10-17 20:14:51.527653] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:06.045 [2024-10-17 20:14:51.527740] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:18:06.045 [2024-10-17 20:14:51.527770] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:06.045 [2024-10-17 20:14:51.527783] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:06.045 20:14:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.045 20:14:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:06.045 20:14:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:06.045 20:14:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:06.045 20:14:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:06.045 20:14:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:06.045 20:14:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:06.045 20:14:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:06.045 20:14:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:06.045 20:14:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:06.045 20:14:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:06.045 20:14:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.045 20:14:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.045 20:14:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.045 20:14:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.045 20:14:51 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.045 20:14:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:06.045 "name": "raid_bdev1", 00:18:06.045 "uuid": "e0b86f7b-b156-46a4-814e-196ecd7ee788", 00:18:06.045 "strip_size_kb": 64, 00:18:06.045 "state": "online", 00:18:06.045 "raid_level": "raid5f", 00:18:06.045 "superblock": true, 00:18:06.045 "num_base_bdevs": 3, 00:18:06.045 "num_base_bdevs_discovered": 2, 00:18:06.045 "num_base_bdevs_operational": 2, 00:18:06.045 "base_bdevs_list": [ 00:18:06.045 { 00:18:06.045 "name": null, 00:18:06.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.045 "is_configured": false, 00:18:06.045 "data_offset": 0, 00:18:06.045 "data_size": 63488 00:18:06.045 }, 00:18:06.045 { 00:18:06.045 "name": "BaseBdev2", 00:18:06.045 "uuid": "8572f1ef-188a-592e-9be7-6c2f34ce0bae", 00:18:06.045 "is_configured": true, 00:18:06.045 "data_offset": 2048, 00:18:06.045 "data_size": 63488 00:18:06.045 }, 00:18:06.045 { 00:18:06.045 "name": "BaseBdev3", 00:18:06.045 "uuid": "5668721d-e418-5ed9-aedf-af814f81a5a5", 00:18:06.045 "is_configured": true, 00:18:06.045 "data_offset": 2048, 00:18:06.045 "data_size": 63488 00:18:06.045 } 00:18:06.045 ] 00:18:06.045 }' 00:18:06.045 20:14:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:06.045 20:14:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.613 20:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:06.613 20:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:06.613 20:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:06.613 20:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:06.613 20:14:52 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:06.613 20:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.613 20:14:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.613 20:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.613 20:14:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.613 20:14:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.613 20:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:06.613 "name": "raid_bdev1", 00:18:06.613 "uuid": "e0b86f7b-b156-46a4-814e-196ecd7ee788", 00:18:06.613 "strip_size_kb": 64, 00:18:06.613 "state": "online", 00:18:06.613 "raid_level": "raid5f", 00:18:06.613 "superblock": true, 00:18:06.613 "num_base_bdevs": 3, 00:18:06.613 "num_base_bdevs_discovered": 2, 00:18:06.613 "num_base_bdevs_operational": 2, 00:18:06.613 "base_bdevs_list": [ 00:18:06.613 { 00:18:06.613 "name": null, 00:18:06.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.613 "is_configured": false, 00:18:06.613 "data_offset": 0, 00:18:06.613 "data_size": 63488 00:18:06.613 }, 00:18:06.613 { 00:18:06.613 "name": "BaseBdev2", 00:18:06.613 "uuid": "8572f1ef-188a-592e-9be7-6c2f34ce0bae", 00:18:06.613 "is_configured": true, 00:18:06.613 "data_offset": 2048, 00:18:06.613 "data_size": 63488 00:18:06.613 }, 00:18:06.613 { 00:18:06.613 "name": "BaseBdev3", 00:18:06.613 "uuid": "5668721d-e418-5ed9-aedf-af814f81a5a5", 00:18:06.613 "is_configured": true, 00:18:06.613 "data_offset": 2048, 00:18:06.613 "data_size": 63488 00:18:06.613 } 00:18:06.613 ] 00:18:06.613 }' 00:18:06.613 20:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:06.613 20:14:52 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:06.613 20:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:06.613 20:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:06.613 20:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:06.613 20:14:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.613 20:14:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.613 [2024-10-17 20:14:52.246263] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:06.613 [2024-10-17 20:14:52.261594] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:18:06.613 20:14:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.613 20:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:06.872 [2024-10-17 20:14:52.269065] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:07.808 20:14:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:07.808 20:14:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:07.808 20:14:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:07.808 20:14:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:07.808 20:14:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:07.808 20:14:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.808 20:14:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:18:07.808 20:14:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.808 20:14:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.808 20:14:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.808 20:14:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:07.808 "name": "raid_bdev1", 00:18:07.808 "uuid": "e0b86f7b-b156-46a4-814e-196ecd7ee788", 00:18:07.808 "strip_size_kb": 64, 00:18:07.808 "state": "online", 00:18:07.808 "raid_level": "raid5f", 00:18:07.808 "superblock": true, 00:18:07.808 "num_base_bdevs": 3, 00:18:07.808 "num_base_bdevs_discovered": 3, 00:18:07.808 "num_base_bdevs_operational": 3, 00:18:07.808 "process": { 00:18:07.808 "type": "rebuild", 00:18:07.808 "target": "spare", 00:18:07.808 "progress": { 00:18:07.808 "blocks": 18432, 00:18:07.808 "percent": 14 00:18:07.808 } 00:18:07.808 }, 00:18:07.808 "base_bdevs_list": [ 00:18:07.808 { 00:18:07.808 "name": "spare", 00:18:07.808 "uuid": "13e40678-b775-54c3-877a-20d9ef98aa74", 00:18:07.808 "is_configured": true, 00:18:07.808 "data_offset": 2048, 00:18:07.808 "data_size": 63488 00:18:07.808 }, 00:18:07.808 { 00:18:07.808 "name": "BaseBdev2", 00:18:07.808 "uuid": "8572f1ef-188a-592e-9be7-6c2f34ce0bae", 00:18:07.808 "is_configured": true, 00:18:07.808 "data_offset": 2048, 00:18:07.808 "data_size": 63488 00:18:07.808 }, 00:18:07.808 { 00:18:07.808 "name": "BaseBdev3", 00:18:07.808 "uuid": "5668721d-e418-5ed9-aedf-af814f81a5a5", 00:18:07.808 "is_configured": true, 00:18:07.808 "data_offset": 2048, 00:18:07.808 "data_size": 63488 00:18:07.808 } 00:18:07.808 ] 00:18:07.808 }' 00:18:07.808 20:14:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:07.808 20:14:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:07.808 
20:14:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:07.808 20:14:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:07.808 20:14:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:07.808 20:14:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:07.808 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:07.808 20:14:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:18:07.808 20:14:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:18:07.808 20:14:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=608 00:18:07.808 20:14:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:07.808 20:14:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:07.808 20:14:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:07.808 20:14:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:07.808 20:14:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:07.808 20:14:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:07.808 20:14:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.808 20:14:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.808 20:14:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.808 20:14:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:18:07.808 20:14:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.068 20:14:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:08.068 "name": "raid_bdev1", 00:18:08.068 "uuid": "e0b86f7b-b156-46a4-814e-196ecd7ee788", 00:18:08.068 "strip_size_kb": 64, 00:18:08.068 "state": "online", 00:18:08.068 "raid_level": "raid5f", 00:18:08.068 "superblock": true, 00:18:08.068 "num_base_bdevs": 3, 00:18:08.068 "num_base_bdevs_discovered": 3, 00:18:08.068 "num_base_bdevs_operational": 3, 00:18:08.068 "process": { 00:18:08.068 "type": "rebuild", 00:18:08.068 "target": "spare", 00:18:08.068 "progress": { 00:18:08.068 "blocks": 22528, 00:18:08.068 "percent": 17 00:18:08.068 } 00:18:08.068 }, 00:18:08.068 "base_bdevs_list": [ 00:18:08.068 { 00:18:08.068 "name": "spare", 00:18:08.068 "uuid": "13e40678-b775-54c3-877a-20d9ef98aa74", 00:18:08.068 "is_configured": true, 00:18:08.068 "data_offset": 2048, 00:18:08.068 "data_size": 63488 00:18:08.068 }, 00:18:08.068 { 00:18:08.068 "name": "BaseBdev2", 00:18:08.068 "uuid": "8572f1ef-188a-592e-9be7-6c2f34ce0bae", 00:18:08.068 "is_configured": true, 00:18:08.068 "data_offset": 2048, 00:18:08.068 "data_size": 63488 00:18:08.068 }, 00:18:08.068 { 00:18:08.068 "name": "BaseBdev3", 00:18:08.068 "uuid": "5668721d-e418-5ed9-aedf-af814f81a5a5", 00:18:08.068 "is_configured": true, 00:18:08.068 "data_offset": 2048, 00:18:08.068 "data_size": 63488 00:18:08.068 } 00:18:08.068 ] 00:18:08.068 }' 00:18:08.068 20:14:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:08.068 20:14:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:08.068 20:14:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:08.068 20:14:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:08.068 
20:14:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:09.004 20:14:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:09.004 20:14:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:09.004 20:14:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:09.004 20:14:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:09.004 20:14:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:09.004 20:14:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:09.004 20:14:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.004 20:14:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.004 20:14:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:09.004 20:14:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.004 20:14:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.262 20:14:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:09.262 "name": "raid_bdev1", 00:18:09.262 "uuid": "e0b86f7b-b156-46a4-814e-196ecd7ee788", 00:18:09.262 "strip_size_kb": 64, 00:18:09.262 "state": "online", 00:18:09.262 "raid_level": "raid5f", 00:18:09.262 "superblock": true, 00:18:09.262 "num_base_bdevs": 3, 00:18:09.262 "num_base_bdevs_discovered": 3, 00:18:09.262 "num_base_bdevs_operational": 3, 00:18:09.262 "process": { 00:18:09.262 "type": "rebuild", 00:18:09.262 "target": "spare", 00:18:09.262 "progress": { 00:18:09.262 "blocks": 47104, 00:18:09.262 "percent": 37 00:18:09.262 } 00:18:09.262 }, 00:18:09.262 
"base_bdevs_list": [ 00:18:09.262 { 00:18:09.262 "name": "spare", 00:18:09.262 "uuid": "13e40678-b775-54c3-877a-20d9ef98aa74", 00:18:09.262 "is_configured": true, 00:18:09.262 "data_offset": 2048, 00:18:09.262 "data_size": 63488 00:18:09.262 }, 00:18:09.262 { 00:18:09.262 "name": "BaseBdev2", 00:18:09.262 "uuid": "8572f1ef-188a-592e-9be7-6c2f34ce0bae", 00:18:09.262 "is_configured": true, 00:18:09.262 "data_offset": 2048, 00:18:09.262 "data_size": 63488 00:18:09.262 }, 00:18:09.262 { 00:18:09.262 "name": "BaseBdev3", 00:18:09.262 "uuid": "5668721d-e418-5ed9-aedf-af814f81a5a5", 00:18:09.262 "is_configured": true, 00:18:09.262 "data_offset": 2048, 00:18:09.262 "data_size": 63488 00:18:09.262 } 00:18:09.262 ] 00:18:09.262 }' 00:18:09.262 20:14:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:09.262 20:14:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:09.262 20:14:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:09.262 20:14:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:09.262 20:14:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:10.198 20:14:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:10.198 20:14:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:10.198 20:14:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:10.198 20:14:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:10.198 20:14:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:10.198 20:14:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:10.198 20:14:55 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.198 20:14:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.198 20:14:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.198 20:14:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:10.198 20:14:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.198 20:14:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:10.198 "name": "raid_bdev1", 00:18:10.198 "uuid": "e0b86f7b-b156-46a4-814e-196ecd7ee788", 00:18:10.198 "strip_size_kb": 64, 00:18:10.198 "state": "online", 00:18:10.198 "raid_level": "raid5f", 00:18:10.198 "superblock": true, 00:18:10.198 "num_base_bdevs": 3, 00:18:10.198 "num_base_bdevs_discovered": 3, 00:18:10.198 "num_base_bdevs_operational": 3, 00:18:10.198 "process": { 00:18:10.198 "type": "rebuild", 00:18:10.198 "target": "spare", 00:18:10.198 "progress": { 00:18:10.198 "blocks": 69632, 00:18:10.198 "percent": 54 00:18:10.198 } 00:18:10.198 }, 00:18:10.198 "base_bdevs_list": [ 00:18:10.198 { 00:18:10.198 "name": "spare", 00:18:10.198 "uuid": "13e40678-b775-54c3-877a-20d9ef98aa74", 00:18:10.198 "is_configured": true, 00:18:10.198 "data_offset": 2048, 00:18:10.198 "data_size": 63488 00:18:10.198 }, 00:18:10.198 { 00:18:10.198 "name": "BaseBdev2", 00:18:10.198 "uuid": "8572f1ef-188a-592e-9be7-6c2f34ce0bae", 00:18:10.198 "is_configured": true, 00:18:10.198 "data_offset": 2048, 00:18:10.198 "data_size": 63488 00:18:10.198 }, 00:18:10.198 { 00:18:10.198 "name": "BaseBdev3", 00:18:10.198 "uuid": "5668721d-e418-5ed9-aedf-af814f81a5a5", 00:18:10.198 "is_configured": true, 00:18:10.198 "data_offset": 2048, 00:18:10.198 "data_size": 63488 00:18:10.198 } 00:18:10.198 ] 00:18:10.198 }' 00:18:10.198 20:14:55 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:10.457 20:14:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:10.457 20:14:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:10.457 20:14:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:10.457 20:14:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:11.441 20:14:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:11.442 20:14:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:11.442 20:14:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:11.442 20:14:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:11.442 20:14:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:11.442 20:14:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:11.442 20:14:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.442 20:14:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.442 20:14:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.442 20:14:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.442 20:14:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.442 20:14:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:11.442 "name": "raid_bdev1", 00:18:11.442 "uuid": "e0b86f7b-b156-46a4-814e-196ecd7ee788", 00:18:11.442 
"strip_size_kb": 64, 00:18:11.442 "state": "online", 00:18:11.442 "raid_level": "raid5f", 00:18:11.442 "superblock": true, 00:18:11.442 "num_base_bdevs": 3, 00:18:11.442 "num_base_bdevs_discovered": 3, 00:18:11.442 "num_base_bdevs_operational": 3, 00:18:11.442 "process": { 00:18:11.442 "type": "rebuild", 00:18:11.442 "target": "spare", 00:18:11.442 "progress": { 00:18:11.442 "blocks": 94208, 00:18:11.442 "percent": 74 00:18:11.442 } 00:18:11.442 }, 00:18:11.442 "base_bdevs_list": [ 00:18:11.442 { 00:18:11.442 "name": "spare", 00:18:11.442 "uuid": "13e40678-b775-54c3-877a-20d9ef98aa74", 00:18:11.442 "is_configured": true, 00:18:11.442 "data_offset": 2048, 00:18:11.442 "data_size": 63488 00:18:11.442 }, 00:18:11.442 { 00:18:11.442 "name": "BaseBdev2", 00:18:11.442 "uuid": "8572f1ef-188a-592e-9be7-6c2f34ce0bae", 00:18:11.442 "is_configured": true, 00:18:11.442 "data_offset": 2048, 00:18:11.442 "data_size": 63488 00:18:11.442 }, 00:18:11.442 { 00:18:11.442 "name": "BaseBdev3", 00:18:11.442 "uuid": "5668721d-e418-5ed9-aedf-af814f81a5a5", 00:18:11.442 "is_configured": true, 00:18:11.442 "data_offset": 2048, 00:18:11.442 "data_size": 63488 00:18:11.442 } 00:18:11.442 ] 00:18:11.442 }' 00:18:11.442 20:14:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:11.442 20:14:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:11.442 20:14:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:11.716 20:14:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:11.716 20:14:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:12.655 20:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:12.655 20:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
00:18:12.655 20:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:12.655 20:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:12.655 20:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:12.655 20:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:12.655 20:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.655 20:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.655 20:14:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.655 20:14:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:12.655 20:14:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.655 20:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:12.655 "name": "raid_bdev1", 00:18:12.655 "uuid": "e0b86f7b-b156-46a4-814e-196ecd7ee788", 00:18:12.655 "strip_size_kb": 64, 00:18:12.655 "state": "online", 00:18:12.655 "raid_level": "raid5f", 00:18:12.655 "superblock": true, 00:18:12.655 "num_base_bdevs": 3, 00:18:12.655 "num_base_bdevs_discovered": 3, 00:18:12.655 "num_base_bdevs_operational": 3, 00:18:12.655 "process": { 00:18:12.655 "type": "rebuild", 00:18:12.655 "target": "spare", 00:18:12.655 "progress": { 00:18:12.655 "blocks": 116736, 00:18:12.655 "percent": 91 00:18:12.655 } 00:18:12.655 }, 00:18:12.655 "base_bdevs_list": [ 00:18:12.655 { 00:18:12.655 "name": "spare", 00:18:12.655 "uuid": "13e40678-b775-54c3-877a-20d9ef98aa74", 00:18:12.655 "is_configured": true, 00:18:12.655 "data_offset": 2048, 00:18:12.655 "data_size": 63488 00:18:12.655 }, 00:18:12.655 { 00:18:12.655 "name": "BaseBdev2", 00:18:12.655 "uuid": 
"8572f1ef-188a-592e-9be7-6c2f34ce0bae", 00:18:12.655 "is_configured": true, 00:18:12.655 "data_offset": 2048, 00:18:12.655 "data_size": 63488 00:18:12.655 }, 00:18:12.655 { 00:18:12.655 "name": "BaseBdev3", 00:18:12.655 "uuid": "5668721d-e418-5ed9-aedf-af814f81a5a5", 00:18:12.655 "is_configured": true, 00:18:12.655 "data_offset": 2048, 00:18:12.655 "data_size": 63488 00:18:12.655 } 00:18:12.655 ] 00:18:12.655 }' 00:18:12.655 20:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:12.655 20:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:12.655 20:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:12.655 20:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:12.655 20:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:12.914 [2024-10-17 20:14:58.543330] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:12.914 [2024-10-17 20:14:58.543434] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:12.914 [2024-10-17 20:14:58.543626] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:13.851 20:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:13.851 20:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:13.851 20:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:13.851 20:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:13.851 20:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:13.851 20:14:59 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:13.851 20:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.851 20:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.851 20:14:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.851 20:14:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:13.851 20:14:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.851 20:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:13.851 "name": "raid_bdev1", 00:18:13.851 "uuid": "e0b86f7b-b156-46a4-814e-196ecd7ee788", 00:18:13.851 "strip_size_kb": 64, 00:18:13.851 "state": "online", 00:18:13.851 "raid_level": "raid5f", 00:18:13.851 "superblock": true, 00:18:13.851 "num_base_bdevs": 3, 00:18:13.851 "num_base_bdevs_discovered": 3, 00:18:13.851 "num_base_bdevs_operational": 3, 00:18:13.851 "base_bdevs_list": [ 00:18:13.851 { 00:18:13.851 "name": "spare", 00:18:13.851 "uuid": "13e40678-b775-54c3-877a-20d9ef98aa74", 00:18:13.851 "is_configured": true, 00:18:13.851 "data_offset": 2048, 00:18:13.851 "data_size": 63488 00:18:13.851 }, 00:18:13.851 { 00:18:13.851 "name": "BaseBdev2", 00:18:13.851 "uuid": "8572f1ef-188a-592e-9be7-6c2f34ce0bae", 00:18:13.852 "is_configured": true, 00:18:13.852 "data_offset": 2048, 00:18:13.852 "data_size": 63488 00:18:13.852 }, 00:18:13.852 { 00:18:13.852 "name": "BaseBdev3", 00:18:13.852 "uuid": "5668721d-e418-5ed9-aedf-af814f81a5a5", 00:18:13.852 "is_configured": true, 00:18:13.852 "data_offset": 2048, 00:18:13.852 "data_size": 63488 00:18:13.852 } 00:18:13.852 ] 00:18:13.852 }' 00:18:13.852 20:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:13.852 20:14:59 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:13.852 20:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:13.852 20:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:13.852 20:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:18:13.852 20:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:13.852 20:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:13.852 20:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:13.852 20:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:13.852 20:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:13.852 20:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.852 20:14:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.852 20:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.852 20:14:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:13.852 20:14:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.110 20:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:14.110 "name": "raid_bdev1", 00:18:14.110 "uuid": "e0b86f7b-b156-46a4-814e-196ecd7ee788", 00:18:14.110 "strip_size_kb": 64, 00:18:14.110 "state": "online", 00:18:14.110 "raid_level": "raid5f", 00:18:14.110 "superblock": true, 00:18:14.110 "num_base_bdevs": 3, 00:18:14.110 "num_base_bdevs_discovered": 3, 00:18:14.110 "num_base_bdevs_operational": 3, 00:18:14.110 "base_bdevs_list": [ 
00:18:14.110 { 00:18:14.110 "name": "spare", 00:18:14.110 "uuid": "13e40678-b775-54c3-877a-20d9ef98aa74", 00:18:14.110 "is_configured": true, 00:18:14.110 "data_offset": 2048, 00:18:14.110 "data_size": 63488 00:18:14.110 }, 00:18:14.110 { 00:18:14.110 "name": "BaseBdev2", 00:18:14.110 "uuid": "8572f1ef-188a-592e-9be7-6c2f34ce0bae", 00:18:14.110 "is_configured": true, 00:18:14.110 "data_offset": 2048, 00:18:14.110 "data_size": 63488 00:18:14.110 }, 00:18:14.110 { 00:18:14.110 "name": "BaseBdev3", 00:18:14.110 "uuid": "5668721d-e418-5ed9-aedf-af814f81a5a5", 00:18:14.110 "is_configured": true, 00:18:14.110 "data_offset": 2048, 00:18:14.110 "data_size": 63488 00:18:14.110 } 00:18:14.110 ] 00:18:14.110 }' 00:18:14.110 20:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:14.110 20:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:14.110 20:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:14.110 20:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:14.111 20:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:14.111 20:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:14.111 20:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:14.111 20:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:14.111 20:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:14.111 20:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:14.111 20:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:14.111 20:14:59 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:14.111 20:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:14.111 20:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:14.111 20:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.111 20:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.111 20:14:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.111 20:14:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.111 20:14:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.111 20:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:14.111 "name": "raid_bdev1", 00:18:14.111 "uuid": "e0b86f7b-b156-46a4-814e-196ecd7ee788", 00:18:14.111 "strip_size_kb": 64, 00:18:14.111 "state": "online", 00:18:14.111 "raid_level": "raid5f", 00:18:14.111 "superblock": true, 00:18:14.111 "num_base_bdevs": 3, 00:18:14.111 "num_base_bdevs_discovered": 3, 00:18:14.111 "num_base_bdevs_operational": 3, 00:18:14.111 "base_bdevs_list": [ 00:18:14.111 { 00:18:14.111 "name": "spare", 00:18:14.111 "uuid": "13e40678-b775-54c3-877a-20d9ef98aa74", 00:18:14.111 "is_configured": true, 00:18:14.111 "data_offset": 2048, 00:18:14.111 "data_size": 63488 00:18:14.111 }, 00:18:14.111 { 00:18:14.111 "name": "BaseBdev2", 00:18:14.111 "uuid": "8572f1ef-188a-592e-9be7-6c2f34ce0bae", 00:18:14.111 "is_configured": true, 00:18:14.111 "data_offset": 2048, 00:18:14.111 "data_size": 63488 00:18:14.111 }, 00:18:14.111 { 00:18:14.111 "name": "BaseBdev3", 00:18:14.111 "uuid": "5668721d-e418-5ed9-aedf-af814f81a5a5", 00:18:14.111 "is_configured": true, 00:18:14.111 "data_offset": 2048, 00:18:14.111 
"data_size": 63488 00:18:14.111 } 00:18:14.111 ] 00:18:14.111 }' 00:18:14.111 20:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:14.111 20:14:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.678 20:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:14.678 20:15:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.678 20:15:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.678 [2024-10-17 20:15:00.156895] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:14.678 [2024-10-17 20:15:00.156927] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:14.678 [2024-10-17 20:15:00.157045] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:14.678 [2024-10-17 20:15:00.157200] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:14.678 [2024-10-17 20:15:00.157227] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:14.678 20:15:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.678 20:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.678 20:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:18:14.678 20:15:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.678 20:15:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.678 20:15:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.678 20:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 
00:18:14.678 20:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:14.678 20:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:14.678 20:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:14.678 20:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:14.678 20:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:14.678 20:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:14.678 20:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:14.678 20:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:14.678 20:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:18:14.678 20:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:14.679 20:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:14.679 20:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:14.937 /dev/nbd0 00:18:14.937 20:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:14.937 20:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:14.937 20:15:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:18:14.937 20:15:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:18:14.937 20:15:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:14.937 20:15:00 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:14.937 20:15:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:18:14.937 20:15:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:18:14.937 20:15:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:14.937 20:15:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:14.937 20:15:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:14.937 1+0 records in 00:18:14.937 1+0 records out 00:18:14.937 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000318535 s, 12.9 MB/s 00:18:14.937 20:15:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:14.937 20:15:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:18:14.937 20:15:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:14.937 20:15:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:14.937 20:15:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:18:14.937 20:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:14.937 20:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:14.937 20:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:15.196 /dev/nbd1 00:18:15.196 20:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:15.457 20:15:00 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:15.457 20:15:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:18:15.457 20:15:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:18:15.457 20:15:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:15.457 20:15:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:15.457 20:15:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:18:15.457 20:15:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:18:15.457 20:15:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:15.457 20:15:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:15.457 20:15:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:15.457 1+0 records in 00:18:15.457 1+0 records out 00:18:15.457 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000374451 s, 10.9 MB/s 00:18:15.457 20:15:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:15.457 20:15:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:18:15.457 20:15:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:15.457 20:15:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:15.457 20:15:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:18:15.457 20:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:15.457 20:15:00 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:15.457 20:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:15.457 20:15:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:15.457 20:15:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:15.457 20:15:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:15.457 20:15:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:15.457 20:15:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:18:15.457 20:15:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:15.457 20:15:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:15.716 20:15:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:15.716 20:15:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:15.716 20:15:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:15.716 20:15:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:15.716 20:15:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:15.716 20:15:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:15.716 20:15:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:15.716 20:15:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:15.716 20:15:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:15.716 
20:15:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:15.976 20:15:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:15.976 20:15:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:16.236 20:15:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:16.236 20:15:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:16.236 20:15:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:16.236 20:15:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:16.236 20:15:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:16.236 20:15:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:16.236 20:15:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:16.236 20:15:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:16.236 20:15:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.236 20:15:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.236 20:15:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.236 20:15:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:16.236 20:15:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.236 20:15:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.236 [2024-10-17 20:15:01.651290] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:16.236 
[2024-10-17 20:15:01.651367] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:16.236 [2024-10-17 20:15:01.651395] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:16.236 [2024-10-17 20:15:01.651426] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:16.236 [2024-10-17 20:15:01.654285] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:16.236 [2024-10-17 20:15:01.654342] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:16.236 [2024-10-17 20:15:01.654455] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:16.236 [2024-10-17 20:15:01.654529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:16.236 [2024-10-17 20:15:01.654684] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:16.236 [2024-10-17 20:15:01.654837] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:16.236 spare 00:18:16.236 20:15:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.236 20:15:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:16.236 20:15:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.236 20:15:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.236 [2024-10-17 20:15:01.754975] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:16.236 [2024-10-17 20:15:01.755067] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:16.236 [2024-10-17 20:15:01.755523] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:18:16.236 [2024-10-17 20:15:01.760245] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:16.236 [2024-10-17 20:15:01.760270] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:16.236 [2024-10-17 20:15:01.760566] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:16.236 20:15:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.236 20:15:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:16.236 20:15:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:16.236 20:15:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:16.236 20:15:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:16.236 20:15:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:16.236 20:15:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:16.236 20:15:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:16.236 20:15:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:16.236 20:15:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:16.236 20:15:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:16.236 20:15:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.236 20:15:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.236 20:15:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.236 20:15:01 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:18:16.236 20:15:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.236 20:15:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:16.236 "name": "raid_bdev1", 00:18:16.236 "uuid": "e0b86f7b-b156-46a4-814e-196ecd7ee788", 00:18:16.236 "strip_size_kb": 64, 00:18:16.236 "state": "online", 00:18:16.236 "raid_level": "raid5f", 00:18:16.236 "superblock": true, 00:18:16.236 "num_base_bdevs": 3, 00:18:16.236 "num_base_bdevs_discovered": 3, 00:18:16.236 "num_base_bdevs_operational": 3, 00:18:16.236 "base_bdevs_list": [ 00:18:16.236 { 00:18:16.236 "name": "spare", 00:18:16.236 "uuid": "13e40678-b775-54c3-877a-20d9ef98aa74", 00:18:16.236 "is_configured": true, 00:18:16.236 "data_offset": 2048, 00:18:16.236 "data_size": 63488 00:18:16.236 }, 00:18:16.236 { 00:18:16.236 "name": "BaseBdev2", 00:18:16.236 "uuid": "8572f1ef-188a-592e-9be7-6c2f34ce0bae", 00:18:16.236 "is_configured": true, 00:18:16.236 "data_offset": 2048, 00:18:16.236 "data_size": 63488 00:18:16.236 }, 00:18:16.236 { 00:18:16.236 "name": "BaseBdev3", 00:18:16.236 "uuid": "5668721d-e418-5ed9-aedf-af814f81a5a5", 00:18:16.236 "is_configured": true, 00:18:16.236 "data_offset": 2048, 00:18:16.236 "data_size": 63488 00:18:16.236 } 00:18:16.236 ] 00:18:16.236 }' 00:18:16.236 20:15:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:16.236 20:15:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.804 20:15:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:16.804 20:15:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:16.804 20:15:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:16.804 20:15:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # 
local target=none 00:18:16.804 20:15:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:16.804 20:15:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.804 20:15:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.804 20:15:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.804 20:15:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.804 20:15:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.804 20:15:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:16.804 "name": "raid_bdev1", 00:18:16.804 "uuid": "e0b86f7b-b156-46a4-814e-196ecd7ee788", 00:18:16.804 "strip_size_kb": 64, 00:18:16.804 "state": "online", 00:18:16.804 "raid_level": "raid5f", 00:18:16.804 "superblock": true, 00:18:16.804 "num_base_bdevs": 3, 00:18:16.804 "num_base_bdevs_discovered": 3, 00:18:16.804 "num_base_bdevs_operational": 3, 00:18:16.804 "base_bdevs_list": [ 00:18:16.804 { 00:18:16.804 "name": "spare", 00:18:16.804 "uuid": "13e40678-b775-54c3-877a-20d9ef98aa74", 00:18:16.804 "is_configured": true, 00:18:16.804 "data_offset": 2048, 00:18:16.804 "data_size": 63488 00:18:16.804 }, 00:18:16.804 { 00:18:16.804 "name": "BaseBdev2", 00:18:16.804 "uuid": "8572f1ef-188a-592e-9be7-6c2f34ce0bae", 00:18:16.804 "is_configured": true, 00:18:16.804 "data_offset": 2048, 00:18:16.804 "data_size": 63488 00:18:16.804 }, 00:18:16.804 { 00:18:16.804 "name": "BaseBdev3", 00:18:16.804 "uuid": "5668721d-e418-5ed9-aedf-af814f81a5a5", 00:18:16.804 "is_configured": true, 00:18:16.804 "data_offset": 2048, 00:18:16.804 "data_size": 63488 00:18:16.804 } 00:18:16.804 ] 00:18:16.804 }' 00:18:16.804 20:15:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:18:16.804 20:15:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:16.804 20:15:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:16.804 20:15:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:16.804 20:15:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.804 20:15:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.804 20:15:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.804 20:15:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:17.063 20:15:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.063 20:15:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:17.063 20:15:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:17.063 20:15:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.063 20:15:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.063 [2024-10-17 20:15:02.506189] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:17.063 20:15:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.063 20:15:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:17.063 20:15:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:17.063 20:15:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:17.063 20:15:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid5f 00:18:17.063 20:15:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:17.063 20:15:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:17.063 20:15:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:17.063 20:15:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:17.063 20:15:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:17.063 20:15:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:17.063 20:15:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.063 20:15:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.063 20:15:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.063 20:15:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.063 20:15:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.063 20:15:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:17.063 "name": "raid_bdev1", 00:18:17.063 "uuid": "e0b86f7b-b156-46a4-814e-196ecd7ee788", 00:18:17.063 "strip_size_kb": 64, 00:18:17.063 "state": "online", 00:18:17.063 "raid_level": "raid5f", 00:18:17.063 "superblock": true, 00:18:17.063 "num_base_bdevs": 3, 00:18:17.063 "num_base_bdevs_discovered": 2, 00:18:17.063 "num_base_bdevs_operational": 2, 00:18:17.063 "base_bdevs_list": [ 00:18:17.063 { 00:18:17.063 "name": null, 00:18:17.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.063 "is_configured": false, 00:18:17.063 "data_offset": 0, 00:18:17.063 "data_size": 63488 00:18:17.063 }, 00:18:17.063 { 00:18:17.063 "name": "BaseBdev2", 
00:18:17.063 "uuid": "8572f1ef-188a-592e-9be7-6c2f34ce0bae", 00:18:17.063 "is_configured": true, 00:18:17.063 "data_offset": 2048, 00:18:17.063 "data_size": 63488 00:18:17.063 }, 00:18:17.063 { 00:18:17.063 "name": "BaseBdev3", 00:18:17.063 "uuid": "5668721d-e418-5ed9-aedf-af814f81a5a5", 00:18:17.063 "is_configured": true, 00:18:17.063 "data_offset": 2048, 00:18:17.063 "data_size": 63488 00:18:17.063 } 00:18:17.063 ] 00:18:17.063 }' 00:18:17.063 20:15:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:17.063 20:15:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.629 20:15:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:17.629 20:15:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.629 20:15:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.629 [2024-10-17 20:15:03.026394] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:17.629 [2024-10-17 20:15:03.026638] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:17.629 [2024-10-17 20:15:03.026664] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:17.629 [2024-10-17 20:15:03.026726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:17.629 [2024-10-17 20:15:03.040862] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:18:17.629 20:15:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.629 20:15:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:17.629 [2024-10-17 20:15:03.047898] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:18.565 20:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:18.565 20:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:18.565 20:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:18.565 20:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:18.565 20:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:18.565 20:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.565 20:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.565 20:15:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.565 20:15:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:18.565 20:15:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.565 20:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:18.565 "name": "raid_bdev1", 00:18:18.565 "uuid": "e0b86f7b-b156-46a4-814e-196ecd7ee788", 00:18:18.565 "strip_size_kb": 64, 00:18:18.565 "state": "online", 00:18:18.565 
"raid_level": "raid5f", 00:18:18.565 "superblock": true, 00:18:18.565 "num_base_bdevs": 3, 00:18:18.565 "num_base_bdevs_discovered": 3, 00:18:18.565 "num_base_bdevs_operational": 3, 00:18:18.565 "process": { 00:18:18.565 "type": "rebuild", 00:18:18.565 "target": "spare", 00:18:18.565 "progress": { 00:18:18.565 "blocks": 18432, 00:18:18.565 "percent": 14 00:18:18.565 } 00:18:18.565 }, 00:18:18.565 "base_bdevs_list": [ 00:18:18.565 { 00:18:18.565 "name": "spare", 00:18:18.565 "uuid": "13e40678-b775-54c3-877a-20d9ef98aa74", 00:18:18.565 "is_configured": true, 00:18:18.565 "data_offset": 2048, 00:18:18.565 "data_size": 63488 00:18:18.565 }, 00:18:18.565 { 00:18:18.565 "name": "BaseBdev2", 00:18:18.565 "uuid": "8572f1ef-188a-592e-9be7-6c2f34ce0bae", 00:18:18.565 "is_configured": true, 00:18:18.565 "data_offset": 2048, 00:18:18.565 "data_size": 63488 00:18:18.565 }, 00:18:18.565 { 00:18:18.565 "name": "BaseBdev3", 00:18:18.565 "uuid": "5668721d-e418-5ed9-aedf-af814f81a5a5", 00:18:18.565 "is_configured": true, 00:18:18.565 "data_offset": 2048, 00:18:18.565 "data_size": 63488 00:18:18.565 } 00:18:18.565 ] 00:18:18.565 }' 00:18:18.565 20:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:18.565 20:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:18.565 20:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:18.565 20:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:18.565 20:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:18.565 20:15:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.565 20:15:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:18.825 [2024-10-17 20:15:04.215113] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:18.825 [2024-10-17 20:15:04.263267] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:18.825 [2024-10-17 20:15:04.263356] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:18.825 [2024-10-17 20:15:04.263378] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:18.825 [2024-10-17 20:15:04.263391] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:18.825 20:15:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.825 20:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:18.825 20:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:18.825 20:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:18.825 20:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:18.825 20:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:18.825 20:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:18.825 20:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:18.825 20:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:18.825 20:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:18.825 20:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:18.825 20:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.825 20:15:04 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.825 20:15:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:18.825 20:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.825 20:15:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.825 20:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:18.825 "name": "raid_bdev1", 00:18:18.825 "uuid": "e0b86f7b-b156-46a4-814e-196ecd7ee788", 00:18:18.825 "strip_size_kb": 64, 00:18:18.825 "state": "online", 00:18:18.825 "raid_level": "raid5f", 00:18:18.825 "superblock": true, 00:18:18.825 "num_base_bdevs": 3, 00:18:18.825 "num_base_bdevs_discovered": 2, 00:18:18.825 "num_base_bdevs_operational": 2, 00:18:18.825 "base_bdevs_list": [ 00:18:18.825 { 00:18:18.825 "name": null, 00:18:18.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.825 "is_configured": false, 00:18:18.825 "data_offset": 0, 00:18:18.825 "data_size": 63488 00:18:18.825 }, 00:18:18.825 { 00:18:18.825 "name": "BaseBdev2", 00:18:18.825 "uuid": "8572f1ef-188a-592e-9be7-6c2f34ce0bae", 00:18:18.825 "is_configured": true, 00:18:18.825 "data_offset": 2048, 00:18:18.825 "data_size": 63488 00:18:18.825 }, 00:18:18.825 { 00:18:18.825 "name": "BaseBdev3", 00:18:18.825 "uuid": "5668721d-e418-5ed9-aedf-af814f81a5a5", 00:18:18.825 "is_configured": true, 00:18:18.825 "data_offset": 2048, 00:18:18.825 "data_size": 63488 00:18:18.825 } 00:18:18.825 ] 00:18:18.825 }' 00:18:18.825 20:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:18.825 20:15:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.391 20:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:19.391 20:15:04 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.391 20:15:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.391 [2024-10-17 20:15:04.816233] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:19.391 [2024-10-17 20:15:04.816326] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:19.391 [2024-10-17 20:15:04.816374] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:18:19.391 [2024-10-17 20:15:04.816395] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:19.391 [2024-10-17 20:15:04.817053] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:19.391 [2024-10-17 20:15:04.817122] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:19.391 [2024-10-17 20:15:04.817265] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:19.391 [2024-10-17 20:15:04.817290] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:19.391 [2024-10-17 20:15:04.817304] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:19.391 [2024-10-17 20:15:04.817336] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:19.391 [2024-10-17 20:15:04.830853] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:18:19.391 spare 00:18:19.391 20:15:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.391 20:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:19.391 [2024-10-17 20:15:04.837932] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:20.326 20:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:20.326 20:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:20.326 20:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:20.326 20:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:20.326 20:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:20.326 20:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.326 20:15:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.327 20:15:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.327 20:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.327 20:15:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.327 20:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:20.327 "name": "raid_bdev1", 00:18:20.327 "uuid": "e0b86f7b-b156-46a4-814e-196ecd7ee788", 00:18:20.327 "strip_size_kb": 64, 00:18:20.327 "state": 
"online", 00:18:20.327 "raid_level": "raid5f", 00:18:20.327 "superblock": true, 00:18:20.327 "num_base_bdevs": 3, 00:18:20.327 "num_base_bdevs_discovered": 3, 00:18:20.327 "num_base_bdevs_operational": 3, 00:18:20.327 "process": { 00:18:20.327 "type": "rebuild", 00:18:20.327 "target": "spare", 00:18:20.327 "progress": { 00:18:20.327 "blocks": 18432, 00:18:20.327 "percent": 14 00:18:20.327 } 00:18:20.327 }, 00:18:20.327 "base_bdevs_list": [ 00:18:20.327 { 00:18:20.327 "name": "spare", 00:18:20.327 "uuid": "13e40678-b775-54c3-877a-20d9ef98aa74", 00:18:20.327 "is_configured": true, 00:18:20.327 "data_offset": 2048, 00:18:20.327 "data_size": 63488 00:18:20.327 }, 00:18:20.327 { 00:18:20.327 "name": "BaseBdev2", 00:18:20.327 "uuid": "8572f1ef-188a-592e-9be7-6c2f34ce0bae", 00:18:20.327 "is_configured": true, 00:18:20.327 "data_offset": 2048, 00:18:20.327 "data_size": 63488 00:18:20.327 }, 00:18:20.327 { 00:18:20.327 "name": "BaseBdev3", 00:18:20.327 "uuid": "5668721d-e418-5ed9-aedf-af814f81a5a5", 00:18:20.327 "is_configured": true, 00:18:20.327 "data_offset": 2048, 00:18:20.327 "data_size": 63488 00:18:20.327 } 00:18:20.327 ] 00:18:20.327 }' 00:18:20.327 20:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:20.327 20:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:20.327 20:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:20.584 20:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:20.584 20:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:20.584 20:15:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.584 20:15:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.584 [2024-10-17 20:15:06.003373] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:20.584 [2024-10-17 20:15:06.052552] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:20.584 [2024-10-17 20:15:06.052649] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:20.584 [2024-10-17 20:15:06.052675] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:20.584 [2024-10-17 20:15:06.052685] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:20.584 20:15:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.584 20:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:20.584 20:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:20.584 20:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:20.584 20:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:20.584 20:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:20.584 20:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:20.584 20:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:20.584 20:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:20.584 20:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:20.584 20:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:20.584 20:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.584 20:15:06 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.584 20:15:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.584 20:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.584 20:15:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.584 20:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:20.584 "name": "raid_bdev1", 00:18:20.584 "uuid": "e0b86f7b-b156-46a4-814e-196ecd7ee788", 00:18:20.584 "strip_size_kb": 64, 00:18:20.584 "state": "online", 00:18:20.584 "raid_level": "raid5f", 00:18:20.584 "superblock": true, 00:18:20.584 "num_base_bdevs": 3, 00:18:20.584 "num_base_bdevs_discovered": 2, 00:18:20.584 "num_base_bdevs_operational": 2, 00:18:20.584 "base_bdevs_list": [ 00:18:20.584 { 00:18:20.584 "name": null, 00:18:20.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.584 "is_configured": false, 00:18:20.584 "data_offset": 0, 00:18:20.584 "data_size": 63488 00:18:20.584 }, 00:18:20.584 { 00:18:20.584 "name": "BaseBdev2", 00:18:20.584 "uuid": "8572f1ef-188a-592e-9be7-6c2f34ce0bae", 00:18:20.584 "is_configured": true, 00:18:20.584 "data_offset": 2048, 00:18:20.584 "data_size": 63488 00:18:20.584 }, 00:18:20.584 { 00:18:20.584 "name": "BaseBdev3", 00:18:20.584 "uuid": "5668721d-e418-5ed9-aedf-af814f81a5a5", 00:18:20.584 "is_configured": true, 00:18:20.584 "data_offset": 2048, 00:18:20.584 "data_size": 63488 00:18:20.584 } 00:18:20.584 ] 00:18:20.584 }' 00:18:20.584 20:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:20.584 20:15:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.149 20:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:21.149 20:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:18:21.149 20:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:21.149 20:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:21.149 20:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:21.149 20:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.149 20:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.149 20:15:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.149 20:15:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.149 20:15:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.149 20:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:21.149 "name": "raid_bdev1", 00:18:21.149 "uuid": "e0b86f7b-b156-46a4-814e-196ecd7ee788", 00:18:21.149 "strip_size_kb": 64, 00:18:21.149 "state": "online", 00:18:21.149 "raid_level": "raid5f", 00:18:21.149 "superblock": true, 00:18:21.149 "num_base_bdevs": 3, 00:18:21.149 "num_base_bdevs_discovered": 2, 00:18:21.149 "num_base_bdevs_operational": 2, 00:18:21.149 "base_bdevs_list": [ 00:18:21.149 { 00:18:21.149 "name": null, 00:18:21.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.149 "is_configured": false, 00:18:21.149 "data_offset": 0, 00:18:21.149 "data_size": 63488 00:18:21.149 }, 00:18:21.149 { 00:18:21.149 "name": "BaseBdev2", 00:18:21.149 "uuid": "8572f1ef-188a-592e-9be7-6c2f34ce0bae", 00:18:21.149 "is_configured": true, 00:18:21.149 "data_offset": 2048, 00:18:21.149 "data_size": 63488 00:18:21.149 }, 00:18:21.149 { 00:18:21.149 "name": "BaseBdev3", 00:18:21.149 "uuid": "5668721d-e418-5ed9-aedf-af814f81a5a5", 00:18:21.149 "is_configured": true, 
00:18:21.149 "data_offset": 2048, 00:18:21.149 "data_size": 63488 00:18:21.149 } 00:18:21.149 ] 00:18:21.149 }' 00:18:21.149 20:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:21.149 20:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:21.149 20:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:21.149 20:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:21.149 20:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:21.149 20:15:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.149 20:15:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.149 20:15:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.149 20:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:21.149 20:15:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.149 20:15:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.149 [2024-10-17 20:15:06.785160] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:21.149 [2024-10-17 20:15:06.785226] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:21.149 [2024-10-17 20:15:06.785262] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:18:21.149 [2024-10-17 20:15:06.785276] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:21.149 [2024-10-17 20:15:06.785877] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:21.149 [2024-10-17 
20:15:06.785908] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:21.149 [2024-10-17 20:15:06.786039] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:21.149 [2024-10-17 20:15:06.786060] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:21.149 [2024-10-17 20:15:06.786094] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:21.149 [2024-10-17 20:15:06.786110] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:21.149 BaseBdev1 00:18:21.149 20:15:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.149 20:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:22.522 20:15:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:22.522 20:15:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:22.522 20:15:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:22.522 20:15:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:22.522 20:15:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:22.523 20:15:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:22.523 20:15:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:22.523 20:15:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:22.523 20:15:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:22.523 20:15:07 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:22.523 20:15:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.523 20:15:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.523 20:15:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.523 20:15:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.523 20:15:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.523 20:15:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:22.523 "name": "raid_bdev1", 00:18:22.523 "uuid": "e0b86f7b-b156-46a4-814e-196ecd7ee788", 00:18:22.523 "strip_size_kb": 64, 00:18:22.523 "state": "online", 00:18:22.523 "raid_level": "raid5f", 00:18:22.523 "superblock": true, 00:18:22.523 "num_base_bdevs": 3, 00:18:22.523 "num_base_bdevs_discovered": 2, 00:18:22.523 "num_base_bdevs_operational": 2, 00:18:22.523 "base_bdevs_list": [ 00:18:22.523 { 00:18:22.523 "name": null, 00:18:22.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.523 "is_configured": false, 00:18:22.523 "data_offset": 0, 00:18:22.523 "data_size": 63488 00:18:22.523 }, 00:18:22.523 { 00:18:22.523 "name": "BaseBdev2", 00:18:22.523 "uuid": "8572f1ef-188a-592e-9be7-6c2f34ce0bae", 00:18:22.523 "is_configured": true, 00:18:22.523 "data_offset": 2048, 00:18:22.523 "data_size": 63488 00:18:22.523 }, 00:18:22.523 { 00:18:22.523 "name": "BaseBdev3", 00:18:22.523 "uuid": "5668721d-e418-5ed9-aedf-af814f81a5a5", 00:18:22.523 "is_configured": true, 00:18:22.523 "data_offset": 2048, 00:18:22.523 "data_size": 63488 00:18:22.523 } 00:18:22.523 ] 00:18:22.523 }' 00:18:22.523 20:15:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:22.523 20:15:07 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:18:22.806 20:15:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:22.806 20:15:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:22.806 20:15:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:22.806 20:15:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:22.806 20:15:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:22.806 20:15:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.806 20:15:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.806 20:15:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.806 20:15:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.806 20:15:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.806 20:15:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:22.806 "name": "raid_bdev1", 00:18:22.806 "uuid": "e0b86f7b-b156-46a4-814e-196ecd7ee788", 00:18:22.806 "strip_size_kb": 64, 00:18:22.806 "state": "online", 00:18:22.806 "raid_level": "raid5f", 00:18:22.806 "superblock": true, 00:18:22.806 "num_base_bdevs": 3, 00:18:22.806 "num_base_bdevs_discovered": 2, 00:18:22.806 "num_base_bdevs_operational": 2, 00:18:22.806 "base_bdevs_list": [ 00:18:22.806 { 00:18:22.806 "name": null, 00:18:22.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.806 "is_configured": false, 00:18:22.806 "data_offset": 0, 00:18:22.806 "data_size": 63488 00:18:22.806 }, 00:18:22.806 { 00:18:22.806 "name": "BaseBdev2", 00:18:22.806 "uuid": "8572f1ef-188a-592e-9be7-6c2f34ce0bae", 
00:18:22.806 "is_configured": true, 00:18:22.806 "data_offset": 2048, 00:18:22.806 "data_size": 63488 00:18:22.806 }, 00:18:22.806 { 00:18:22.806 "name": "BaseBdev3", 00:18:22.806 "uuid": "5668721d-e418-5ed9-aedf-af814f81a5a5", 00:18:22.806 "is_configured": true, 00:18:22.806 "data_offset": 2048, 00:18:22.806 "data_size": 63488 00:18:22.806 } 00:18:22.806 ] 00:18:22.806 }' 00:18:22.806 20:15:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:22.806 20:15:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:22.806 20:15:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:23.068 20:15:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:23.068 20:15:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:23.068 20:15:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:18:23.068 20:15:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:23.068 20:15:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:23.068 20:15:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:23.068 20:15:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:23.068 20:15:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:23.068 20:15:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:23.068 20:15:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.068 20:15:08 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.068 [2024-10-17 20:15:08.493974] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:23.068 [2024-10-17 20:15:08.494235] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:23.068 [2024-10-17 20:15:08.494260] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:23.068 request: 00:18:23.068 { 00:18:23.068 "base_bdev": "BaseBdev1", 00:18:23.068 "raid_bdev": "raid_bdev1", 00:18:23.068 "method": "bdev_raid_add_base_bdev", 00:18:23.068 "req_id": 1 00:18:23.068 } 00:18:23.068 Got JSON-RPC error response 00:18:23.068 response: 00:18:23.068 { 00:18:23.068 "code": -22, 00:18:23.068 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:23.068 } 00:18:23.068 20:15:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:23.068 20:15:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:18:23.068 20:15:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:23.068 20:15:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:23.068 20:15:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:23.068 20:15:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:24.003 20:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:24.003 20:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:24.003 20:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:24.003 20:15:09 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:24.003 20:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:24.003 20:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:24.003 20:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:24.003 20:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:24.003 20:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:24.003 20:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:24.003 20:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.003 20:15:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.003 20:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.003 20:15:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.003 20:15:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.003 20:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:24.003 "name": "raid_bdev1", 00:18:24.003 "uuid": "e0b86f7b-b156-46a4-814e-196ecd7ee788", 00:18:24.003 "strip_size_kb": 64, 00:18:24.003 "state": "online", 00:18:24.003 "raid_level": "raid5f", 00:18:24.003 "superblock": true, 00:18:24.003 "num_base_bdevs": 3, 00:18:24.003 "num_base_bdevs_discovered": 2, 00:18:24.003 "num_base_bdevs_operational": 2, 00:18:24.003 "base_bdevs_list": [ 00:18:24.003 { 00:18:24.003 "name": null, 00:18:24.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.003 "is_configured": false, 00:18:24.003 "data_offset": 0, 00:18:24.003 "data_size": 63488 00:18:24.003 }, 00:18:24.003 { 00:18:24.003 
"name": "BaseBdev2", 00:18:24.003 "uuid": "8572f1ef-188a-592e-9be7-6c2f34ce0bae", 00:18:24.003 "is_configured": true, 00:18:24.003 "data_offset": 2048, 00:18:24.003 "data_size": 63488 00:18:24.003 }, 00:18:24.003 { 00:18:24.003 "name": "BaseBdev3", 00:18:24.003 "uuid": "5668721d-e418-5ed9-aedf-af814f81a5a5", 00:18:24.003 "is_configured": true, 00:18:24.003 "data_offset": 2048, 00:18:24.003 "data_size": 63488 00:18:24.003 } 00:18:24.003 ] 00:18:24.003 }' 00:18:24.003 20:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:24.003 20:15:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.570 20:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:24.570 20:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:24.570 20:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:24.570 20:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:24.570 20:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:24.570 20:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.570 20:15:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.570 20:15:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.570 20:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.570 20:15:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.570 20:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:24.570 "name": "raid_bdev1", 00:18:24.570 "uuid": "e0b86f7b-b156-46a4-814e-196ecd7ee788", 00:18:24.570 
"strip_size_kb": 64, 00:18:24.570 "state": "online", 00:18:24.570 "raid_level": "raid5f", 00:18:24.570 "superblock": true, 00:18:24.570 "num_base_bdevs": 3, 00:18:24.570 "num_base_bdevs_discovered": 2, 00:18:24.570 "num_base_bdevs_operational": 2, 00:18:24.570 "base_bdevs_list": [ 00:18:24.570 { 00:18:24.570 "name": null, 00:18:24.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.571 "is_configured": false, 00:18:24.571 "data_offset": 0, 00:18:24.571 "data_size": 63488 00:18:24.571 }, 00:18:24.571 { 00:18:24.571 "name": "BaseBdev2", 00:18:24.571 "uuid": "8572f1ef-188a-592e-9be7-6c2f34ce0bae", 00:18:24.571 "is_configured": true, 00:18:24.571 "data_offset": 2048, 00:18:24.571 "data_size": 63488 00:18:24.571 }, 00:18:24.571 { 00:18:24.571 "name": "BaseBdev3", 00:18:24.571 "uuid": "5668721d-e418-5ed9-aedf-af814f81a5a5", 00:18:24.571 "is_configured": true, 00:18:24.571 "data_offset": 2048, 00:18:24.571 "data_size": 63488 00:18:24.571 } 00:18:24.571 ] 00:18:24.571 }' 00:18:24.571 20:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:24.571 20:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:24.571 20:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:24.571 20:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:24.571 20:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82290 00:18:24.571 20:15:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 82290 ']' 00:18:24.571 20:15:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 82290 00:18:24.571 20:15:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:18:24.571 20:15:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:24.571 20:15:10 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82290 00:18:24.829 killing process with pid 82290 00:18:24.829 Received shutdown signal, test time was about 60.000000 seconds 00:18:24.829 00:18:24.829 Latency(us) 00:18:24.829 [2024-10-17T20:15:10.483Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:24.829 [2024-10-17T20:15:10.483Z] =================================================================================================================== 00:18:24.829 [2024-10-17T20:15:10.483Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:24.829 20:15:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:24.829 20:15:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:24.829 20:15:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82290' 00:18:24.829 20:15:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 82290 00:18:24.829 [2024-10-17 20:15:10.236669] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:24.829 20:15:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 82290 00:18:24.829 [2024-10-17 20:15:10.236828] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:24.829 [2024-10-17 20:15:10.236912] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:24.829 [2024-10-17 20:15:10.236934] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:25.087 [2024-10-17 20:15:10.581274] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:26.021 ************************************ 00:18:26.021 END TEST raid5f_rebuild_test_sb 00:18:26.021 ************************************ 00:18:26.021 20:15:11 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:18:26.021 00:18:26.021 real 0m24.975s 00:18:26.021 user 0m33.429s 00:18:26.021 sys 0m2.689s 00:18:26.021 20:15:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:26.021 20:15:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.021 20:15:11 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:18:26.021 20:15:11 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:18:26.021 20:15:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:26.021 20:15:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:26.021 20:15:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:26.022 ************************************ 00:18:26.022 START TEST raid5f_state_function_test 00:18:26.022 ************************************ 00:18:26.022 20:15:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 false 00:18:26.022 20:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:18:26.022 20:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:18:26.022 20:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:18:26.022 20:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:26.022 20:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:26.022 20:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:26.022 20:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:26.022 20:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:18:26.022 20:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:26.022 20:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:26.022 20:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:26.022 20:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:26.022 20:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:18:26.022 20:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:26.022 20:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:26.022 20:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:18:26.022 20:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:26.022 20:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:26.022 20:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:26.022 20:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:26.022 20:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:26.022 20:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:26.022 20:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:26.022 20:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:26.022 20:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:18:26.022 20:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:18:26.022 20:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:18:26.022 20:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:18:26.022 20:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:18:26.022 20:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83059 00:18:26.022 Process raid pid: 83059 00:18:26.022 20:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83059' 00:18:26.022 20:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83059 00:18:26.022 20:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:26.022 20:15:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 83059 ']' 00:18:26.022 20:15:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:26.022 20:15:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:26.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:26.022 20:15:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:26.022 20:15:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:26.022 20:15:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.283 [2024-10-17 20:15:11.725286] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 
00:18:26.283 [2024-10-17 20:15:11.726350] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:26.283 [2024-10-17 20:15:11.912865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.542 [2024-10-17 20:15:12.046181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:26.801 [2024-10-17 20:15:12.246310] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:26.801 [2024-10-17 20:15:12.246373] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:27.369 20:15:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:27.369 20:15:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:18:27.369 20:15:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:27.369 20:15:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.369 20:15:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.369 [2024-10-17 20:15:12.765120] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:27.369 [2024-10-17 20:15:12.765184] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:27.369 [2024-10-17 20:15:12.765202] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:27.369 [2024-10-17 20:15:12.765234] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:27.369 [2024-10-17 20:15:12.765244] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:18:27.369 [2024-10-17 20:15:12.765259] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:27.369 [2024-10-17 20:15:12.765270] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:27.369 [2024-10-17 20:15:12.765284] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:27.369 20:15:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.369 20:15:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:27.369 20:15:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:27.369 20:15:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:27.369 20:15:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:27.369 20:15:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:27.369 20:15:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:27.369 20:15:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:27.369 20:15:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:27.369 20:15:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:27.369 20:15:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:27.369 20:15:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.369 20:15:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:27.369 20:15:12 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.369 20:15:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.369 20:15:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.369 20:15:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:27.369 "name": "Existed_Raid", 00:18:27.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.369 "strip_size_kb": 64, 00:18:27.369 "state": "configuring", 00:18:27.369 "raid_level": "raid5f", 00:18:27.369 "superblock": false, 00:18:27.369 "num_base_bdevs": 4, 00:18:27.369 "num_base_bdevs_discovered": 0, 00:18:27.369 "num_base_bdevs_operational": 4, 00:18:27.369 "base_bdevs_list": [ 00:18:27.369 { 00:18:27.369 "name": "BaseBdev1", 00:18:27.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.369 "is_configured": false, 00:18:27.369 "data_offset": 0, 00:18:27.369 "data_size": 0 00:18:27.369 }, 00:18:27.369 { 00:18:27.369 "name": "BaseBdev2", 00:18:27.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.369 "is_configured": false, 00:18:27.369 "data_offset": 0, 00:18:27.369 "data_size": 0 00:18:27.369 }, 00:18:27.369 { 00:18:27.369 "name": "BaseBdev3", 00:18:27.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.369 "is_configured": false, 00:18:27.369 "data_offset": 0, 00:18:27.369 "data_size": 0 00:18:27.369 }, 00:18:27.369 { 00:18:27.369 "name": "BaseBdev4", 00:18:27.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.369 "is_configured": false, 00:18:27.369 "data_offset": 0, 00:18:27.369 "data_size": 0 00:18:27.369 } 00:18:27.369 ] 00:18:27.369 }' 00:18:27.369 20:15:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:27.369 20:15:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.937 20:15:13 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:27.937 20:15:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.937 20:15:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.937 [2024-10-17 20:15:13.309197] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:27.937 [2024-10-17 20:15:13.309267] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:27.937 20:15:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.937 20:15:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:27.937 20:15:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.937 20:15:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.937 [2024-10-17 20:15:13.321264] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:27.937 [2024-10-17 20:15:13.321508] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:27.937 [2024-10-17 20:15:13.321628] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:27.937 [2024-10-17 20:15:13.321689] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:27.937 [2024-10-17 20:15:13.321897] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:27.937 [2024-10-17 20:15:13.321957] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:27.937 [2024-10-17 20:15:13.322032] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:18:27.937 [2024-10-17 20:15:13.322211] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:27.937 20:15:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.937 20:15:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:27.937 20:15:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.937 20:15:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.937 [2024-10-17 20:15:13.367744] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:27.937 BaseBdev1 00:18:27.937 20:15:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.937 20:15:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:27.937 20:15:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:18:27.937 20:15:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:27.937 20:15:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:27.937 20:15:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:27.937 20:15:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:27.937 20:15:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:27.937 20:15:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.937 20:15:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.937 20:15:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.937 
20:15:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:27.937 20:15:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.937 20:15:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.937 [ 00:18:27.937 { 00:18:27.937 "name": "BaseBdev1", 00:18:27.937 "aliases": [ 00:18:27.937 "a42b591b-f28b-4b82-841c-ea36a73c638c" 00:18:27.937 ], 00:18:27.937 "product_name": "Malloc disk", 00:18:27.937 "block_size": 512, 00:18:27.937 "num_blocks": 65536, 00:18:27.937 "uuid": "a42b591b-f28b-4b82-841c-ea36a73c638c", 00:18:27.937 "assigned_rate_limits": { 00:18:27.937 "rw_ios_per_sec": 0, 00:18:27.937 "rw_mbytes_per_sec": 0, 00:18:27.937 "r_mbytes_per_sec": 0, 00:18:27.937 "w_mbytes_per_sec": 0 00:18:27.937 }, 00:18:27.937 "claimed": true, 00:18:27.937 "claim_type": "exclusive_write", 00:18:27.937 "zoned": false, 00:18:27.937 "supported_io_types": { 00:18:27.937 "read": true, 00:18:27.937 "write": true, 00:18:27.937 "unmap": true, 00:18:27.937 "flush": true, 00:18:27.937 "reset": true, 00:18:27.937 "nvme_admin": false, 00:18:27.937 "nvme_io": false, 00:18:27.937 "nvme_io_md": false, 00:18:27.937 "write_zeroes": true, 00:18:27.937 "zcopy": true, 00:18:27.937 "get_zone_info": false, 00:18:27.937 "zone_management": false, 00:18:27.937 "zone_append": false, 00:18:27.937 "compare": false, 00:18:27.937 "compare_and_write": false, 00:18:27.937 "abort": true, 00:18:27.937 "seek_hole": false, 00:18:27.937 "seek_data": false, 00:18:27.937 "copy": true, 00:18:27.937 "nvme_iov_md": false 00:18:27.937 }, 00:18:27.937 "memory_domains": [ 00:18:27.937 { 00:18:27.937 "dma_device_id": "system", 00:18:27.937 "dma_device_type": 1 00:18:27.937 }, 00:18:27.937 { 00:18:27.937 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:27.937 "dma_device_type": 2 00:18:27.937 } 00:18:27.937 ], 00:18:27.937 "driver_specific": {} 00:18:27.937 } 
00:18:27.937 ] 00:18:27.937 20:15:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.937 20:15:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:27.937 20:15:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:27.937 20:15:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:27.937 20:15:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:27.937 20:15:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:27.937 20:15:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:27.937 20:15:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:27.937 20:15:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:27.937 20:15:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:27.937 20:15:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:27.937 20:15:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:27.937 20:15:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.937 20:15:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:27.938 20:15:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.938 20:15:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.938 20:15:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:18:27.938 20:15:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:27.938 "name": "Existed_Raid", 00:18:27.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.938 "strip_size_kb": 64, 00:18:27.938 "state": "configuring", 00:18:27.938 "raid_level": "raid5f", 00:18:27.938 "superblock": false, 00:18:27.938 "num_base_bdevs": 4, 00:18:27.938 "num_base_bdevs_discovered": 1, 00:18:27.938 "num_base_bdevs_operational": 4, 00:18:27.938 "base_bdevs_list": [ 00:18:27.938 { 00:18:27.938 "name": "BaseBdev1", 00:18:27.938 "uuid": "a42b591b-f28b-4b82-841c-ea36a73c638c", 00:18:27.938 "is_configured": true, 00:18:27.938 "data_offset": 0, 00:18:27.938 "data_size": 65536 00:18:27.938 }, 00:18:27.938 { 00:18:27.938 "name": "BaseBdev2", 00:18:27.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.938 "is_configured": false, 00:18:27.938 "data_offset": 0, 00:18:27.938 "data_size": 0 00:18:27.938 }, 00:18:27.938 { 00:18:27.938 "name": "BaseBdev3", 00:18:27.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.938 "is_configured": false, 00:18:27.938 "data_offset": 0, 00:18:27.938 "data_size": 0 00:18:27.938 }, 00:18:27.938 { 00:18:27.938 "name": "BaseBdev4", 00:18:27.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.938 "is_configured": false, 00:18:27.938 "data_offset": 0, 00:18:27.938 "data_size": 0 00:18:27.938 } 00:18:27.938 ] 00:18:27.938 }' 00:18:27.938 20:15:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:27.938 20:15:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.505 20:15:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:28.505 20:15:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.505 20:15:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.505 
[2024-10-17 20:15:13.903939] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:28.505 [2024-10-17 20:15:13.904006] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:28.505 20:15:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.505 20:15:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:28.505 20:15:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.505 20:15:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.505 [2024-10-17 20:15:13.916071] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:28.505 [2024-10-17 20:15:13.918503] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:28.505 [2024-10-17 20:15:13.918675] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:28.505 [2024-10-17 20:15:13.918703] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:28.505 [2024-10-17 20:15:13.918724] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:28.505 [2024-10-17 20:15:13.918736] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:28.505 [2024-10-17 20:15:13.918751] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:28.505 20:15:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.505 20:15:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:28.505 20:15:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:18:28.505 20:15:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:28.505 20:15:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:28.505 20:15:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:28.505 20:15:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:28.505 20:15:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:28.505 20:15:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:28.505 20:15:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:28.506 20:15:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:28.506 20:15:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:28.506 20:15:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:28.506 20:15:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.506 20:15:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.506 20:15:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.506 20:15:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:28.506 20:15:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.506 20:15:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:28.506 "name": "Existed_Raid", 00:18:28.506 "uuid": "00000000-0000-0000-0000-000000000000", 
00:18:28.506 "strip_size_kb": 64, 00:18:28.506 "state": "configuring", 00:18:28.506 "raid_level": "raid5f", 00:18:28.506 "superblock": false, 00:18:28.506 "num_base_bdevs": 4, 00:18:28.506 "num_base_bdevs_discovered": 1, 00:18:28.506 "num_base_bdevs_operational": 4, 00:18:28.506 "base_bdevs_list": [ 00:18:28.506 { 00:18:28.506 "name": "BaseBdev1", 00:18:28.506 "uuid": "a42b591b-f28b-4b82-841c-ea36a73c638c", 00:18:28.506 "is_configured": true, 00:18:28.506 "data_offset": 0, 00:18:28.506 "data_size": 65536 00:18:28.506 }, 00:18:28.506 { 00:18:28.506 "name": "BaseBdev2", 00:18:28.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.506 "is_configured": false, 00:18:28.506 "data_offset": 0, 00:18:28.506 "data_size": 0 00:18:28.506 }, 00:18:28.506 { 00:18:28.506 "name": "BaseBdev3", 00:18:28.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.506 "is_configured": false, 00:18:28.506 "data_offset": 0, 00:18:28.506 "data_size": 0 00:18:28.506 }, 00:18:28.506 { 00:18:28.506 "name": "BaseBdev4", 00:18:28.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.506 "is_configured": false, 00:18:28.506 "data_offset": 0, 00:18:28.506 "data_size": 0 00:18:28.506 } 00:18:28.506 ] 00:18:28.506 }' 00:18:28.506 20:15:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:28.506 20:15:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.073 20:15:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:29.073 20:15:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.073 20:15:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.073 [2024-10-17 20:15:14.486141] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:29.073 BaseBdev2 00:18:29.073 20:15:14 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.073 20:15:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:29.073 20:15:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:18:29.073 20:15:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:29.073 20:15:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:29.073 20:15:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:29.073 20:15:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:29.073 20:15:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:29.073 20:15:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.073 20:15:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.073 20:15:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.073 20:15:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:29.073 20:15:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.073 20:15:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.073 [ 00:18:29.073 { 00:18:29.073 "name": "BaseBdev2", 00:18:29.073 "aliases": [ 00:18:29.073 "565ba52d-379f-49a0-932d-56763eeb25fa" 00:18:29.073 ], 00:18:29.073 "product_name": "Malloc disk", 00:18:29.073 "block_size": 512, 00:18:29.073 "num_blocks": 65536, 00:18:29.073 "uuid": "565ba52d-379f-49a0-932d-56763eeb25fa", 00:18:29.073 "assigned_rate_limits": { 00:18:29.073 "rw_ios_per_sec": 0, 00:18:29.073 "rw_mbytes_per_sec": 0, 00:18:29.073 
"r_mbytes_per_sec": 0, 00:18:29.073 "w_mbytes_per_sec": 0 00:18:29.073 }, 00:18:29.073 "claimed": true, 00:18:29.073 "claim_type": "exclusive_write", 00:18:29.073 "zoned": false, 00:18:29.073 "supported_io_types": { 00:18:29.073 "read": true, 00:18:29.073 "write": true, 00:18:29.073 "unmap": true, 00:18:29.073 "flush": true, 00:18:29.073 "reset": true, 00:18:29.073 "nvme_admin": false, 00:18:29.073 "nvme_io": false, 00:18:29.073 "nvme_io_md": false, 00:18:29.073 "write_zeroes": true, 00:18:29.073 "zcopy": true, 00:18:29.073 "get_zone_info": false, 00:18:29.073 "zone_management": false, 00:18:29.073 "zone_append": false, 00:18:29.073 "compare": false, 00:18:29.073 "compare_and_write": false, 00:18:29.073 "abort": true, 00:18:29.073 "seek_hole": false, 00:18:29.073 "seek_data": false, 00:18:29.073 "copy": true, 00:18:29.073 "nvme_iov_md": false 00:18:29.073 }, 00:18:29.073 "memory_domains": [ 00:18:29.073 { 00:18:29.073 "dma_device_id": "system", 00:18:29.073 "dma_device_type": 1 00:18:29.073 }, 00:18:29.073 { 00:18:29.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:29.073 "dma_device_type": 2 00:18:29.073 } 00:18:29.073 ], 00:18:29.073 "driver_specific": {} 00:18:29.073 } 00:18:29.073 ] 00:18:29.073 20:15:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.073 20:15:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:29.073 20:15:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:29.073 20:15:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:29.073 20:15:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:29.073 20:15:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:29.073 20:15:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:18:29.073 20:15:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:29.073 20:15:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:29.073 20:15:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:29.073 20:15:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:29.073 20:15:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:29.073 20:15:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:29.073 20:15:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:29.073 20:15:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.073 20:15:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:29.073 20:15:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.073 20:15:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.073 20:15:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.073 20:15:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:29.073 "name": "Existed_Raid", 00:18:29.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.073 "strip_size_kb": 64, 00:18:29.073 "state": "configuring", 00:18:29.073 "raid_level": "raid5f", 00:18:29.073 "superblock": false, 00:18:29.073 "num_base_bdevs": 4, 00:18:29.073 "num_base_bdevs_discovered": 2, 00:18:29.073 "num_base_bdevs_operational": 4, 00:18:29.073 "base_bdevs_list": [ 00:18:29.073 { 00:18:29.073 "name": "BaseBdev1", 00:18:29.073 "uuid": 
"a42b591b-f28b-4b82-841c-ea36a73c638c", 00:18:29.073 "is_configured": true, 00:18:29.073 "data_offset": 0, 00:18:29.073 "data_size": 65536 00:18:29.073 }, 00:18:29.073 { 00:18:29.073 "name": "BaseBdev2", 00:18:29.073 "uuid": "565ba52d-379f-49a0-932d-56763eeb25fa", 00:18:29.073 "is_configured": true, 00:18:29.073 "data_offset": 0, 00:18:29.073 "data_size": 65536 00:18:29.073 }, 00:18:29.073 { 00:18:29.073 "name": "BaseBdev3", 00:18:29.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.073 "is_configured": false, 00:18:29.073 "data_offset": 0, 00:18:29.073 "data_size": 0 00:18:29.073 }, 00:18:29.073 { 00:18:29.073 "name": "BaseBdev4", 00:18:29.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.073 "is_configured": false, 00:18:29.073 "data_offset": 0, 00:18:29.073 "data_size": 0 00:18:29.073 } 00:18:29.073 ] 00:18:29.073 }' 00:18:29.073 20:15:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:29.073 20:15:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.641 20:15:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:29.642 20:15:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.642 20:15:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.642 [2024-10-17 20:15:15.103063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:29.642 BaseBdev3 00:18:29.642 20:15:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.642 20:15:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:18:29.642 20:15:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:18:29.642 20:15:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- 
# local bdev_timeout= 00:18:29.642 20:15:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:29.642 20:15:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:29.642 20:15:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:29.642 20:15:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:29.642 20:15:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.642 20:15:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.642 20:15:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.642 20:15:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:29.642 20:15:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.642 20:15:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.642 [ 00:18:29.642 { 00:18:29.642 "name": "BaseBdev3", 00:18:29.642 "aliases": [ 00:18:29.642 "380ec9f7-deb5-4e3a-9b1a-915be41266a3" 00:18:29.642 ], 00:18:29.642 "product_name": "Malloc disk", 00:18:29.642 "block_size": 512, 00:18:29.642 "num_blocks": 65536, 00:18:29.642 "uuid": "380ec9f7-deb5-4e3a-9b1a-915be41266a3", 00:18:29.642 "assigned_rate_limits": { 00:18:29.642 "rw_ios_per_sec": 0, 00:18:29.642 "rw_mbytes_per_sec": 0, 00:18:29.642 "r_mbytes_per_sec": 0, 00:18:29.642 "w_mbytes_per_sec": 0 00:18:29.642 }, 00:18:29.642 "claimed": true, 00:18:29.642 "claim_type": "exclusive_write", 00:18:29.642 "zoned": false, 00:18:29.642 "supported_io_types": { 00:18:29.642 "read": true, 00:18:29.642 "write": true, 00:18:29.642 "unmap": true, 00:18:29.642 "flush": true, 00:18:29.642 "reset": true, 00:18:29.642 "nvme_admin": false, 
00:18:29.642 "nvme_io": false, 00:18:29.642 "nvme_io_md": false, 00:18:29.642 "write_zeroes": true, 00:18:29.642 "zcopy": true, 00:18:29.642 "get_zone_info": false, 00:18:29.642 "zone_management": false, 00:18:29.642 "zone_append": false, 00:18:29.642 "compare": false, 00:18:29.642 "compare_and_write": false, 00:18:29.642 "abort": true, 00:18:29.642 "seek_hole": false, 00:18:29.642 "seek_data": false, 00:18:29.642 "copy": true, 00:18:29.642 "nvme_iov_md": false 00:18:29.642 }, 00:18:29.642 "memory_domains": [ 00:18:29.642 { 00:18:29.642 "dma_device_id": "system", 00:18:29.642 "dma_device_type": 1 00:18:29.642 }, 00:18:29.642 { 00:18:29.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:29.642 "dma_device_type": 2 00:18:29.642 } 00:18:29.642 ], 00:18:29.642 "driver_specific": {} 00:18:29.642 } 00:18:29.642 ] 00:18:29.642 20:15:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.642 20:15:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:29.642 20:15:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:29.642 20:15:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:29.642 20:15:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:29.642 20:15:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:29.642 20:15:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:29.642 20:15:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:29.642 20:15:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:29.642 20:15:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:18:29.642 20:15:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:29.642 20:15:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:29.642 20:15:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:29.642 20:15:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:29.642 20:15:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.642 20:15:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:29.642 20:15:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.642 20:15:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.642 20:15:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.642 20:15:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:29.642 "name": "Existed_Raid", 00:18:29.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.642 "strip_size_kb": 64, 00:18:29.642 "state": "configuring", 00:18:29.642 "raid_level": "raid5f", 00:18:29.642 "superblock": false, 00:18:29.642 "num_base_bdevs": 4, 00:18:29.642 "num_base_bdevs_discovered": 3, 00:18:29.642 "num_base_bdevs_operational": 4, 00:18:29.642 "base_bdevs_list": [ 00:18:29.642 { 00:18:29.642 "name": "BaseBdev1", 00:18:29.642 "uuid": "a42b591b-f28b-4b82-841c-ea36a73c638c", 00:18:29.642 "is_configured": true, 00:18:29.642 "data_offset": 0, 00:18:29.642 "data_size": 65536 00:18:29.642 }, 00:18:29.642 { 00:18:29.642 "name": "BaseBdev2", 00:18:29.642 "uuid": "565ba52d-379f-49a0-932d-56763eeb25fa", 00:18:29.642 "is_configured": true, 00:18:29.642 "data_offset": 0, 00:18:29.642 "data_size": 65536 00:18:29.642 }, 00:18:29.642 { 
00:18:29.642 "name": "BaseBdev3", 00:18:29.642 "uuid": "380ec9f7-deb5-4e3a-9b1a-915be41266a3", 00:18:29.642 "is_configured": true, 00:18:29.642 "data_offset": 0, 00:18:29.642 "data_size": 65536 00:18:29.642 }, 00:18:29.642 { 00:18:29.642 "name": "BaseBdev4", 00:18:29.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.642 "is_configured": false, 00:18:29.642 "data_offset": 0, 00:18:29.642 "data_size": 0 00:18:29.642 } 00:18:29.642 ] 00:18:29.642 }' 00:18:29.642 20:15:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:29.642 20:15:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.209 20:15:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:18:30.209 20:15:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.209 20:15:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.209 [2024-10-17 20:15:15.722546] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:30.209 [2024-10-17 20:15:15.722620] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:30.209 [2024-10-17 20:15:15.722634] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:18:30.210 [2024-10-17 20:15:15.722940] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:30.210 [2024-10-17 20:15:15.729925] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:30.210 [2024-10-17 20:15:15.729954] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:30.210 [2024-10-17 20:15:15.730329] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:30.210 BaseBdev4 00:18:30.210 20:15:15 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.210 20:15:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:18:30.210 20:15:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:18:30.210 20:15:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:30.210 20:15:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:30.210 20:15:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:30.210 20:15:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:30.210 20:15:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:30.210 20:15:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.210 20:15:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.210 20:15:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.210 20:15:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:30.210 20:15:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.210 20:15:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.210 [ 00:18:30.210 { 00:18:30.210 "name": "BaseBdev4", 00:18:30.210 "aliases": [ 00:18:30.210 "bf588a85-b4d2-4b45-bc0d-1d5ee17c9ec9" 00:18:30.210 ], 00:18:30.210 "product_name": "Malloc disk", 00:18:30.210 "block_size": 512, 00:18:30.210 "num_blocks": 65536, 00:18:30.210 "uuid": "bf588a85-b4d2-4b45-bc0d-1d5ee17c9ec9", 00:18:30.210 "assigned_rate_limits": { 00:18:30.210 "rw_ios_per_sec": 0, 00:18:30.210 
"rw_mbytes_per_sec": 0, 00:18:30.210 "r_mbytes_per_sec": 0, 00:18:30.210 "w_mbytes_per_sec": 0 00:18:30.210 }, 00:18:30.210 "claimed": true, 00:18:30.210 "claim_type": "exclusive_write", 00:18:30.210 "zoned": false, 00:18:30.210 "supported_io_types": { 00:18:30.210 "read": true, 00:18:30.210 "write": true, 00:18:30.210 "unmap": true, 00:18:30.210 "flush": true, 00:18:30.210 "reset": true, 00:18:30.210 "nvme_admin": false, 00:18:30.210 "nvme_io": false, 00:18:30.210 "nvme_io_md": false, 00:18:30.210 "write_zeroes": true, 00:18:30.210 "zcopy": true, 00:18:30.210 "get_zone_info": false, 00:18:30.210 "zone_management": false, 00:18:30.210 "zone_append": false, 00:18:30.210 "compare": false, 00:18:30.210 "compare_and_write": false, 00:18:30.210 "abort": true, 00:18:30.210 "seek_hole": false, 00:18:30.210 "seek_data": false, 00:18:30.210 "copy": true, 00:18:30.210 "nvme_iov_md": false 00:18:30.210 }, 00:18:30.210 "memory_domains": [ 00:18:30.210 { 00:18:30.210 "dma_device_id": "system", 00:18:30.210 "dma_device_type": 1 00:18:30.210 }, 00:18:30.210 { 00:18:30.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:30.210 "dma_device_type": 2 00:18:30.210 } 00:18:30.210 ], 00:18:30.210 "driver_specific": {} 00:18:30.210 } 00:18:30.210 ] 00:18:30.210 20:15:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.210 20:15:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:30.210 20:15:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:30.210 20:15:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:30.210 20:15:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:18:30.210 20:15:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:30.210 20:15:15 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:30.210 20:15:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:30.210 20:15:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:30.210 20:15:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:30.210 20:15:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:30.210 20:15:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:30.210 20:15:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:30.210 20:15:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:30.210 20:15:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.210 20:15:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.210 20:15:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.210 20:15:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:30.210 20:15:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.210 20:15:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:30.210 "name": "Existed_Raid", 00:18:30.210 "uuid": "e543fb48-b792-459c-882e-98dfbaa261a7", 00:18:30.210 "strip_size_kb": 64, 00:18:30.210 "state": "online", 00:18:30.210 "raid_level": "raid5f", 00:18:30.210 "superblock": false, 00:18:30.210 "num_base_bdevs": 4, 00:18:30.210 "num_base_bdevs_discovered": 4, 00:18:30.210 "num_base_bdevs_operational": 4, 00:18:30.210 "base_bdevs_list": [ 00:18:30.210 { 00:18:30.210 "name": 
"BaseBdev1", 00:18:30.210 "uuid": "a42b591b-f28b-4b82-841c-ea36a73c638c", 00:18:30.210 "is_configured": true, 00:18:30.210 "data_offset": 0, 00:18:30.210 "data_size": 65536 00:18:30.210 }, 00:18:30.210 { 00:18:30.210 "name": "BaseBdev2", 00:18:30.210 "uuid": "565ba52d-379f-49a0-932d-56763eeb25fa", 00:18:30.210 "is_configured": true, 00:18:30.210 "data_offset": 0, 00:18:30.210 "data_size": 65536 00:18:30.210 }, 00:18:30.210 { 00:18:30.210 "name": "BaseBdev3", 00:18:30.210 "uuid": "380ec9f7-deb5-4e3a-9b1a-915be41266a3", 00:18:30.210 "is_configured": true, 00:18:30.210 "data_offset": 0, 00:18:30.210 "data_size": 65536 00:18:30.210 }, 00:18:30.210 { 00:18:30.210 "name": "BaseBdev4", 00:18:30.210 "uuid": "bf588a85-b4d2-4b45-bc0d-1d5ee17c9ec9", 00:18:30.210 "is_configured": true, 00:18:30.210 "data_offset": 0, 00:18:30.210 "data_size": 65536 00:18:30.210 } 00:18:30.210 ] 00:18:30.210 }' 00:18:30.210 20:15:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:30.210 20:15:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.777 20:15:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:30.777 20:15:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:30.777 20:15:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:30.777 20:15:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:30.777 20:15:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:30.777 20:15:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:30.777 20:15:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:30.777 20:15:16 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.777 20:15:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.777 20:15:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:30.777 [2024-10-17 20:15:16.281982] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:30.777 20:15:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.778 20:15:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:30.778 "name": "Existed_Raid", 00:18:30.778 "aliases": [ 00:18:30.778 "e543fb48-b792-459c-882e-98dfbaa261a7" 00:18:30.778 ], 00:18:30.778 "product_name": "Raid Volume", 00:18:30.778 "block_size": 512, 00:18:30.778 "num_blocks": 196608, 00:18:30.778 "uuid": "e543fb48-b792-459c-882e-98dfbaa261a7", 00:18:30.778 "assigned_rate_limits": { 00:18:30.778 "rw_ios_per_sec": 0, 00:18:30.778 "rw_mbytes_per_sec": 0, 00:18:30.778 "r_mbytes_per_sec": 0, 00:18:30.778 "w_mbytes_per_sec": 0 00:18:30.778 }, 00:18:30.778 "claimed": false, 00:18:30.778 "zoned": false, 00:18:30.778 "supported_io_types": { 00:18:30.778 "read": true, 00:18:30.778 "write": true, 00:18:30.778 "unmap": false, 00:18:30.778 "flush": false, 00:18:30.778 "reset": true, 00:18:30.778 "nvme_admin": false, 00:18:30.778 "nvme_io": false, 00:18:30.778 "nvme_io_md": false, 00:18:30.778 "write_zeroes": true, 00:18:30.778 "zcopy": false, 00:18:30.778 "get_zone_info": false, 00:18:30.778 "zone_management": false, 00:18:30.778 "zone_append": false, 00:18:30.778 "compare": false, 00:18:30.778 "compare_and_write": false, 00:18:30.778 "abort": false, 00:18:30.778 "seek_hole": false, 00:18:30.778 "seek_data": false, 00:18:30.778 "copy": false, 00:18:30.778 "nvme_iov_md": false 00:18:30.778 }, 00:18:30.778 "driver_specific": { 00:18:30.778 "raid": { 00:18:30.778 "uuid": "e543fb48-b792-459c-882e-98dfbaa261a7", 00:18:30.778 "strip_size_kb": 64, 
00:18:30.778 "state": "online", 00:18:30.778 "raid_level": "raid5f", 00:18:30.778 "superblock": false, 00:18:30.778 "num_base_bdevs": 4, 00:18:30.778 "num_base_bdevs_discovered": 4, 00:18:30.778 "num_base_bdevs_operational": 4, 00:18:30.778 "base_bdevs_list": [ 00:18:30.778 { 00:18:30.778 "name": "BaseBdev1", 00:18:30.778 "uuid": "a42b591b-f28b-4b82-841c-ea36a73c638c", 00:18:30.778 "is_configured": true, 00:18:30.778 "data_offset": 0, 00:18:30.778 "data_size": 65536 00:18:30.778 }, 00:18:30.778 { 00:18:30.778 "name": "BaseBdev2", 00:18:30.778 "uuid": "565ba52d-379f-49a0-932d-56763eeb25fa", 00:18:30.778 "is_configured": true, 00:18:30.778 "data_offset": 0, 00:18:30.778 "data_size": 65536 00:18:30.778 }, 00:18:30.778 { 00:18:30.778 "name": "BaseBdev3", 00:18:30.778 "uuid": "380ec9f7-deb5-4e3a-9b1a-915be41266a3", 00:18:30.778 "is_configured": true, 00:18:30.778 "data_offset": 0, 00:18:30.778 "data_size": 65536 00:18:30.778 }, 00:18:30.778 { 00:18:30.778 "name": "BaseBdev4", 00:18:30.778 "uuid": "bf588a85-b4d2-4b45-bc0d-1d5ee17c9ec9", 00:18:30.778 "is_configured": true, 00:18:30.778 "data_offset": 0, 00:18:30.778 "data_size": 65536 00:18:30.778 } 00:18:30.778 ] 00:18:30.778 } 00:18:30.778 } 00:18:30.778 }' 00:18:30.778 20:15:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:30.778 20:15:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:30.778 BaseBdev2 00:18:30.778 BaseBdev3 00:18:30.778 BaseBdev4' 00:18:30.778 20:15:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:31.036 20:15:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:31.036 20:15:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:31.036 20:15:16 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:31.036 20:15:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:31.036 20:15:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.036 20:15:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.036 20:15:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.036 20:15:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:31.037 20:15:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:31.037 20:15:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:31.037 20:15:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:31.037 20:15:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:31.037 20:15:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.037 20:15:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.037 20:15:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.037 20:15:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:31.037 20:15:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:31.037 20:15:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:31.037 20:15:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:18:31.037 20:15:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:31.037 20:15:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.037 20:15:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.037 20:15:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.037 20:15:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:31.037 20:15:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:31.037 20:15:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:31.037 20:15:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:18:31.037 20:15:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.037 20:15:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.037 20:15:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:31.037 20:15:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.037 20:15:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:31.037 20:15:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:31.037 20:15:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:31.037 20:15:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.037 20:15:16 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:18:31.037 [2024-10-17 20:15:16.665908] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:31.295 20:15:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.295 20:15:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:31.295 20:15:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:18:31.295 20:15:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:31.295 20:15:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:18:31.295 20:15:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:31.295 20:15:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:18:31.295 20:15:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:31.295 20:15:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:31.295 20:15:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:31.295 20:15:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:31.295 20:15:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:31.295 20:15:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:31.295 20:15:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:31.295 20:15:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:31.295 20:15:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:31.295 20:15:16 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.295 20:15:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:31.295 20:15:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.295 20:15:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.295 20:15:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.295 20:15:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:31.295 "name": "Existed_Raid", 00:18:31.295 "uuid": "e543fb48-b792-459c-882e-98dfbaa261a7", 00:18:31.295 "strip_size_kb": 64, 00:18:31.295 "state": "online", 00:18:31.295 "raid_level": "raid5f", 00:18:31.295 "superblock": false, 00:18:31.295 "num_base_bdevs": 4, 00:18:31.295 "num_base_bdevs_discovered": 3, 00:18:31.295 "num_base_bdevs_operational": 3, 00:18:31.295 "base_bdevs_list": [ 00:18:31.295 { 00:18:31.295 "name": null, 00:18:31.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.295 "is_configured": false, 00:18:31.295 "data_offset": 0, 00:18:31.295 "data_size": 65536 00:18:31.295 }, 00:18:31.295 { 00:18:31.295 "name": "BaseBdev2", 00:18:31.295 "uuid": "565ba52d-379f-49a0-932d-56763eeb25fa", 00:18:31.295 "is_configured": true, 00:18:31.295 "data_offset": 0, 00:18:31.295 "data_size": 65536 00:18:31.295 }, 00:18:31.295 { 00:18:31.295 "name": "BaseBdev3", 00:18:31.295 "uuid": "380ec9f7-deb5-4e3a-9b1a-915be41266a3", 00:18:31.295 "is_configured": true, 00:18:31.295 "data_offset": 0, 00:18:31.295 "data_size": 65536 00:18:31.295 }, 00:18:31.295 { 00:18:31.295 "name": "BaseBdev4", 00:18:31.295 "uuid": "bf588a85-b4d2-4b45-bc0d-1d5ee17c9ec9", 00:18:31.295 "is_configured": true, 00:18:31.295 "data_offset": 0, 00:18:31.295 "data_size": 65536 00:18:31.295 } 00:18:31.295 ] 00:18:31.295 }' 00:18:31.295 
20:15:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:31.295 20:15:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.861 20:15:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:31.861 20:15:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:31.861 20:15:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.861 20:15:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:31.861 20:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.861 20:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.861 20:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.861 20:15:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:31.861 20:15:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:31.861 20:15:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:31.861 20:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.861 20:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.861 [2024-10-17 20:15:17.353732] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:31.861 [2024-10-17 20:15:17.353853] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:31.861 [2024-10-17 20:15:17.436592] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:31.861 20:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:18:31.861 20:15:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:31.861 20:15:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:31.861 20:15:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.861 20:15:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:31.861 20:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.861 20:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.861 20:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.861 20:15:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:31.861 20:15:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:31.861 20:15:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:18:31.861 20:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.861 20:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.861 [2024-10-17 20:15:17.496632] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:32.119 20:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.119 20:15:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:32.119 20:15:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:32.119 20:15:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.119 20:15:17 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.119 20:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.119 20:15:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:32.119 20:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.119 20:15:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:32.119 20:15:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:32.119 20:15:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:18:32.119 20:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.119 20:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.119 [2024-10-17 20:15:17.640662] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:32.119 [2024-10-17 20:15:17.640730] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:32.119 20:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.119 20:15:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:32.119 20:15:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:32.119 20:15:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.119 20:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.119 20:15:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:32.119 20:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
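The trace above exercises SPDK's degraded-operation path: base bdevs are deleted one at a time with `bdev_malloc_delete`, and after each removal the test re-reads the array with `bdev_raid_get_bdevs all` to confirm the raid bdev still reports itself while raid5f redundancy covers the missing member. A minimal sketch of that removal loop follows; `rpc_cmd` is a hypothetical stub standing in for SPDK's rpc.py wrapper, since no live target is available outside the test run:

```shell
# Hypothetical stand-in for SPDK's rpc_cmd wrapper; a real run sends these
# calls to a live SPDK target over its JSON-RPC socket.
rpc_cmd() {
	case "$1" in
		bdev_malloc_delete) echo "deleted $2" >&2 ;;
		bdev_raid_get_bdevs) printf '[{"name": "Existed_Raid", "state": "online"}]\n' ;;
	esac
}

num_base_bdevs=4
for (( i = 1; i < num_base_bdevs; i++ )); do
	rpc_cmd bdev_malloc_delete "BaseBdev$((i + 1))"
	# Re-read the array after each removal; the name must survive while the
	# raid level's redundancy still covers the missing members.
	raid_bdev=$(rpc_cmd bdev_raid_get_bdevs all | sed -n 's/.*"name": "\([^"]*\)".*/\1/p')
	[[ $raid_bdev == Existed_Raid ]] || { echo "raid bdev vanished" >&2; exit 1; }
done
echo "Existed_Raid survived $((num_base_bdevs - 1)) removals"
```

The real script additionally asserts the expected `state` and `num_base_bdevs_discovered` after each step, via the `verify_raid_bdev_state` helper seen in the trace.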
00:18:32.119 20:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.378 20:15:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:32.378 20:15:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:32.378 20:15:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:18:32.378 20:15:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:18:32.378 20:15:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:32.378 20:15:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:32.378 20:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.378 20:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.378 BaseBdev2 00:18:32.378 20:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.378 20:15:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:18:32.378 20:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:18:32.378 20:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:32.378 20:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:32.378 20:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:32.378 20:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:32.378 20:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:32.378 20:15:17 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.378 20:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.378 20:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.378 20:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:32.378 20:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.378 20:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.378 [ 00:18:32.378 { 00:18:32.378 "name": "BaseBdev2", 00:18:32.378 "aliases": [ 00:18:32.378 "a9ea5370-27bd-4d99-925c-6b913611d19a" 00:18:32.378 ], 00:18:32.378 "product_name": "Malloc disk", 00:18:32.378 "block_size": 512, 00:18:32.378 "num_blocks": 65536, 00:18:32.378 "uuid": "a9ea5370-27bd-4d99-925c-6b913611d19a", 00:18:32.378 "assigned_rate_limits": { 00:18:32.378 "rw_ios_per_sec": 0, 00:18:32.378 "rw_mbytes_per_sec": 0, 00:18:32.378 "r_mbytes_per_sec": 0, 00:18:32.378 "w_mbytes_per_sec": 0 00:18:32.378 }, 00:18:32.378 "claimed": false, 00:18:32.378 "zoned": false, 00:18:32.378 "supported_io_types": { 00:18:32.378 "read": true, 00:18:32.378 "write": true, 00:18:32.378 "unmap": true, 00:18:32.378 "flush": true, 00:18:32.378 "reset": true, 00:18:32.378 "nvme_admin": false, 00:18:32.378 "nvme_io": false, 00:18:32.378 "nvme_io_md": false, 00:18:32.378 "write_zeroes": true, 00:18:32.378 "zcopy": true, 00:18:32.378 "get_zone_info": false, 00:18:32.378 "zone_management": false, 00:18:32.378 "zone_append": false, 00:18:32.378 "compare": false, 00:18:32.378 "compare_and_write": false, 00:18:32.378 "abort": true, 00:18:32.378 "seek_hole": false, 00:18:32.378 "seek_data": false, 00:18:32.378 "copy": true, 00:18:32.378 "nvme_iov_md": false 00:18:32.378 }, 00:18:32.378 "memory_domains": [ 00:18:32.378 { 00:18:32.378 "dma_device_id": "system", 00:18:32.378 
"dma_device_type": 1 00:18:32.378 }, 00:18:32.378 { 00:18:32.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:32.378 "dma_device_type": 2 00:18:32.378 } 00:18:32.378 ], 00:18:32.378 "driver_specific": {} 00:18:32.378 } 00:18:32.378 ] 00:18:32.378 20:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.378 20:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:32.378 20:15:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:32.378 20:15:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:32.378 20:15:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:32.378 20:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.378 20:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.378 BaseBdev3 00:18:32.378 20:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.378 20:15:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:18:32.378 20:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:18:32.378 20:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:32.378 20:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:32.378 20:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:32.378 20:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:32.378 20:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:32.378 20:15:17 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.378 20:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.378 20:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.378 20:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:32.378 20:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.378 20:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.378 [ 00:18:32.378 { 00:18:32.378 "name": "BaseBdev3", 00:18:32.378 "aliases": [ 00:18:32.378 "e28d2b16-6a29-4579-9a36-9b2f21abf5e2" 00:18:32.378 ], 00:18:32.378 "product_name": "Malloc disk", 00:18:32.378 "block_size": 512, 00:18:32.378 "num_blocks": 65536, 00:18:32.378 "uuid": "e28d2b16-6a29-4579-9a36-9b2f21abf5e2", 00:18:32.378 "assigned_rate_limits": { 00:18:32.378 "rw_ios_per_sec": 0, 00:18:32.378 "rw_mbytes_per_sec": 0, 00:18:32.378 "r_mbytes_per_sec": 0, 00:18:32.378 "w_mbytes_per_sec": 0 00:18:32.378 }, 00:18:32.378 "claimed": false, 00:18:32.378 "zoned": false, 00:18:32.378 "supported_io_types": { 00:18:32.378 "read": true, 00:18:32.378 "write": true, 00:18:32.378 "unmap": true, 00:18:32.378 "flush": true, 00:18:32.378 "reset": true, 00:18:32.378 "nvme_admin": false, 00:18:32.378 "nvme_io": false, 00:18:32.378 "nvme_io_md": false, 00:18:32.378 "write_zeroes": true, 00:18:32.378 "zcopy": true, 00:18:32.378 "get_zone_info": false, 00:18:32.378 "zone_management": false, 00:18:32.378 "zone_append": false, 00:18:32.378 "compare": false, 00:18:32.378 "compare_and_write": false, 00:18:32.378 "abort": true, 00:18:32.378 "seek_hole": false, 00:18:32.378 "seek_data": false, 00:18:32.378 "copy": true, 00:18:32.378 "nvme_iov_md": false 00:18:32.378 }, 00:18:32.378 "memory_domains": [ 00:18:32.378 { 00:18:32.378 
"dma_device_id": "system", 00:18:32.378 "dma_device_type": 1 00:18:32.378 }, 00:18:32.378 { 00:18:32.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:32.379 "dma_device_type": 2 00:18:32.379 } 00:18:32.379 ], 00:18:32.379 "driver_specific": {} 00:18:32.379 } 00:18:32.379 ] 00:18:32.379 20:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.379 20:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:32.379 20:15:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:32.379 20:15:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:32.379 20:15:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:18:32.379 20:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.379 20:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.379 BaseBdev4 00:18:32.379 20:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.379 20:15:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:18:32.379 20:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:18:32.379 20:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:32.379 20:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:32.379 20:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:32.379 20:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:32.379 20:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 
00:18:32.379 20:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.379 20:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.379 20:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.379 20:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:32.379 20:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.379 20:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.379 [ 00:18:32.379 { 00:18:32.379 "name": "BaseBdev4", 00:18:32.379 "aliases": [ 00:18:32.379 "44f6a829-0342-4f98-8b6e-e46544537874" 00:18:32.379 ], 00:18:32.379 "product_name": "Malloc disk", 00:18:32.379 "block_size": 512, 00:18:32.379 "num_blocks": 65536, 00:18:32.379 "uuid": "44f6a829-0342-4f98-8b6e-e46544537874", 00:18:32.379 "assigned_rate_limits": { 00:18:32.379 "rw_ios_per_sec": 0, 00:18:32.379 "rw_mbytes_per_sec": 0, 00:18:32.379 "r_mbytes_per_sec": 0, 00:18:32.379 "w_mbytes_per_sec": 0 00:18:32.379 }, 00:18:32.379 "claimed": false, 00:18:32.379 "zoned": false, 00:18:32.379 "supported_io_types": { 00:18:32.379 "read": true, 00:18:32.379 "write": true, 00:18:32.379 "unmap": true, 00:18:32.379 "flush": true, 00:18:32.379 "reset": true, 00:18:32.379 "nvme_admin": false, 00:18:32.379 "nvme_io": false, 00:18:32.379 "nvme_io_md": false, 00:18:32.379 "write_zeroes": true, 00:18:32.379 "zcopy": true, 00:18:32.379 "get_zone_info": false, 00:18:32.379 "zone_management": false, 00:18:32.379 "zone_append": false, 00:18:32.379 "compare": false, 00:18:32.379 "compare_and_write": false, 00:18:32.379 "abort": true, 00:18:32.379 "seek_hole": false, 00:18:32.379 "seek_data": false, 00:18:32.379 "copy": true, 00:18:32.379 "nvme_iov_md": false 00:18:32.379 }, 00:18:32.379 "memory_domains": [ 
00:18:32.379 { 00:18:32.379 "dma_device_id": "system", 00:18:32.379 "dma_device_type": 1 00:18:32.379 }, 00:18:32.379 { 00:18:32.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:32.379 "dma_device_type": 2 00:18:32.379 } 00:18:32.379 ], 00:18:32.379 "driver_specific": {} 00:18:32.379 } 00:18:32.379 ] 00:18:32.379 20:15:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.379 20:15:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:32.379 20:15:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:32.379 20:15:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:32.379 20:15:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:32.379 20:15:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.379 20:15:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.379 [2024-10-17 20:15:18.008171] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:32.379 [2024-10-17 20:15:18.008407] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:32.379 [2024-10-17 20:15:18.008571] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:32.379 [2024-10-17 20:15:18.011082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:32.379 [2024-10-17 20:15:18.011155] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:32.379 20:15:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.379 20:15:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # 
verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:32.379 20:15:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:32.379 20:15:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:32.379 20:15:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:32.379 20:15:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:32.379 20:15:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:32.379 20:15:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:32.379 20:15:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:32.379 20:15:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:32.379 20:15:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:32.379 20:15:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.379 20:15:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:32.379 20:15:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.379 20:15:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.638 20:15:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.638 20:15:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:32.638 "name": "Existed_Raid", 00:18:32.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.638 "strip_size_kb": 64, 00:18:32.638 "state": "configuring", 00:18:32.638 "raid_level": "raid5f", 00:18:32.638 
"superblock": false, 00:18:32.638 "num_base_bdevs": 4, 00:18:32.638 "num_base_bdevs_discovered": 3, 00:18:32.638 "num_base_bdevs_operational": 4, 00:18:32.638 "base_bdevs_list": [ 00:18:32.638 { 00:18:32.638 "name": "BaseBdev1", 00:18:32.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.638 "is_configured": false, 00:18:32.638 "data_offset": 0, 00:18:32.638 "data_size": 0 00:18:32.638 }, 00:18:32.638 { 00:18:32.638 "name": "BaseBdev2", 00:18:32.638 "uuid": "a9ea5370-27bd-4d99-925c-6b913611d19a", 00:18:32.638 "is_configured": true, 00:18:32.638 "data_offset": 0, 00:18:32.638 "data_size": 65536 00:18:32.638 }, 00:18:32.638 { 00:18:32.638 "name": "BaseBdev3", 00:18:32.638 "uuid": "e28d2b16-6a29-4579-9a36-9b2f21abf5e2", 00:18:32.638 "is_configured": true, 00:18:32.638 "data_offset": 0, 00:18:32.638 "data_size": 65536 00:18:32.638 }, 00:18:32.638 { 00:18:32.638 "name": "BaseBdev4", 00:18:32.638 "uuid": "44f6a829-0342-4f98-8b6e-e46544537874", 00:18:32.638 "is_configured": true, 00:18:32.638 "data_offset": 0, 00:18:32.638 "data_size": 65536 00:18:32.638 } 00:18:32.638 ] 00:18:32.638 }' 00:18:32.638 20:15:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:32.638 20:15:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:33.205 20:15:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:33.205 20:15:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.205 20:15:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:33.205 [2024-10-17 20:15:18.556393] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:33.205 20:15:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.205 20:15:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid5f 64 4 00:18:33.205 20:15:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:33.205 20:15:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:33.205 20:15:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:33.205 20:15:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:33.205 20:15:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:33.205 20:15:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:33.205 20:15:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:33.205 20:15:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:33.205 20:15:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:33.205 20:15:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.205 20:15:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.205 20:15:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:33.205 20:15:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:33.205 20:15:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.205 20:15:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:33.205 "name": "Existed_Raid", 00:18:33.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.205 "strip_size_kb": 64, 00:18:33.205 "state": "configuring", 00:18:33.205 "raid_level": "raid5f", 00:18:33.205 "superblock": false, 
00:18:33.205 "num_base_bdevs": 4, 00:18:33.205 "num_base_bdevs_discovered": 2, 00:18:33.205 "num_base_bdevs_operational": 4, 00:18:33.205 "base_bdevs_list": [ 00:18:33.205 { 00:18:33.205 "name": "BaseBdev1", 00:18:33.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.205 "is_configured": false, 00:18:33.205 "data_offset": 0, 00:18:33.205 "data_size": 0 00:18:33.205 }, 00:18:33.205 { 00:18:33.205 "name": null, 00:18:33.205 "uuid": "a9ea5370-27bd-4d99-925c-6b913611d19a", 00:18:33.205 "is_configured": false, 00:18:33.205 "data_offset": 0, 00:18:33.205 "data_size": 65536 00:18:33.205 }, 00:18:33.205 { 00:18:33.205 "name": "BaseBdev3", 00:18:33.205 "uuid": "e28d2b16-6a29-4579-9a36-9b2f21abf5e2", 00:18:33.205 "is_configured": true, 00:18:33.205 "data_offset": 0, 00:18:33.205 "data_size": 65536 00:18:33.205 }, 00:18:33.205 { 00:18:33.205 "name": "BaseBdev4", 00:18:33.205 "uuid": "44f6a829-0342-4f98-8b6e-e46544537874", 00:18:33.205 "is_configured": true, 00:18:33.205 "data_offset": 0, 00:18:33.205 "data_size": 65536 00:18:33.205 } 00:18:33.205 ] 00:18:33.205 }' 00:18:33.205 20:15:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:33.205 20:15:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:33.478 20:15:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.478 20:15:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:33.478 20:15:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.478 20:15:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:33.478 20:15:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.762 20:15:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:18:33.762 
20:15:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:33.762 20:15:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.762 20:15:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:33.762 [2024-10-17 20:15:19.180361] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:33.762 BaseBdev1 00:18:33.762 20:15:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.762 20:15:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:18:33.762 20:15:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:18:33.762 20:15:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:33.762 20:15:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:33.762 20:15:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:33.762 20:15:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:33.762 20:15:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:33.762 20:15:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.762 20:15:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:33.762 20:15:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.762 20:15:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:33.762 20:15:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.762 
20:15:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:33.762 [ 00:18:33.762 { 00:18:33.762 "name": "BaseBdev1", 00:18:33.762 "aliases": [ 00:18:33.762 "1fbb866d-cb6b-4eee-90f2-6d4ff5b80838" 00:18:33.762 ], 00:18:33.762 "product_name": "Malloc disk", 00:18:33.762 "block_size": 512, 00:18:33.762 "num_blocks": 65536, 00:18:33.762 "uuid": "1fbb866d-cb6b-4eee-90f2-6d4ff5b80838", 00:18:33.762 "assigned_rate_limits": { 00:18:33.762 "rw_ios_per_sec": 0, 00:18:33.762 "rw_mbytes_per_sec": 0, 00:18:33.762 "r_mbytes_per_sec": 0, 00:18:33.762 "w_mbytes_per_sec": 0 00:18:33.762 }, 00:18:33.762 "claimed": true, 00:18:33.762 "claim_type": "exclusive_write", 00:18:33.762 "zoned": false, 00:18:33.762 "supported_io_types": { 00:18:33.762 "read": true, 00:18:33.762 "write": true, 00:18:33.762 "unmap": true, 00:18:33.762 "flush": true, 00:18:33.762 "reset": true, 00:18:33.762 "nvme_admin": false, 00:18:33.762 "nvme_io": false, 00:18:33.762 "nvme_io_md": false, 00:18:33.762 "write_zeroes": true, 00:18:33.762 "zcopy": true, 00:18:33.762 "get_zone_info": false, 00:18:33.762 "zone_management": false, 00:18:33.762 "zone_append": false, 00:18:33.762 "compare": false, 00:18:33.762 "compare_and_write": false, 00:18:33.762 "abort": true, 00:18:33.762 "seek_hole": false, 00:18:33.762 "seek_data": false, 00:18:33.762 "copy": true, 00:18:33.762 "nvme_iov_md": false 00:18:33.762 }, 00:18:33.762 "memory_domains": [ 00:18:33.762 { 00:18:33.762 "dma_device_id": "system", 00:18:33.762 "dma_device_type": 1 00:18:33.762 }, 00:18:33.762 { 00:18:33.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:33.762 "dma_device_type": 2 00:18:33.762 } 00:18:33.762 ], 00:18:33.762 "driver_specific": {} 00:18:33.762 } 00:18:33.762 ] 00:18:33.762 20:15:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.762 20:15:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:33.762 20:15:19 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:33.762 20:15:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:33.762 20:15:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:33.762 20:15:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:33.762 20:15:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:33.762 20:15:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:33.762 20:15:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:33.762 20:15:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:33.762 20:15:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:33.762 20:15:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:33.762 20:15:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.762 20:15:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:33.762 20:15:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.762 20:15:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:33.762 20:15:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.762 20:15:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:33.762 "name": "Existed_Raid", 00:18:33.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.762 "strip_size_kb": 64, 00:18:33.762 "state": 
"configuring", 00:18:33.762 "raid_level": "raid5f", 00:18:33.762 "superblock": false, 00:18:33.762 "num_base_bdevs": 4, 00:18:33.762 "num_base_bdevs_discovered": 3, 00:18:33.762 "num_base_bdevs_operational": 4, 00:18:33.762 "base_bdevs_list": [ 00:18:33.762 { 00:18:33.762 "name": "BaseBdev1", 00:18:33.762 "uuid": "1fbb866d-cb6b-4eee-90f2-6d4ff5b80838", 00:18:33.762 "is_configured": true, 00:18:33.762 "data_offset": 0, 00:18:33.762 "data_size": 65536 00:18:33.762 }, 00:18:33.762 { 00:18:33.762 "name": null, 00:18:33.762 "uuid": "a9ea5370-27bd-4d99-925c-6b913611d19a", 00:18:33.762 "is_configured": false, 00:18:33.762 "data_offset": 0, 00:18:33.762 "data_size": 65536 00:18:33.762 }, 00:18:33.762 { 00:18:33.762 "name": "BaseBdev3", 00:18:33.762 "uuid": "e28d2b16-6a29-4579-9a36-9b2f21abf5e2", 00:18:33.762 "is_configured": true, 00:18:33.762 "data_offset": 0, 00:18:33.762 "data_size": 65536 00:18:33.762 }, 00:18:33.762 { 00:18:33.762 "name": "BaseBdev4", 00:18:33.762 "uuid": "44f6a829-0342-4f98-8b6e-e46544537874", 00:18:33.762 "is_configured": true, 00:18:33.762 "data_offset": 0, 00:18:33.762 "data_size": 65536 00:18:33.762 } 00:18:33.762 ] 00:18:33.762 }' 00:18:33.762 20:15:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:33.762 20:15:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.328 20:15:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:34.328 20:15:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.328 20:15:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.328 20:15:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.328 20:15:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.328 20:15:19 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:18:34.328 20:15:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:18:34.328 20:15:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.328 20:15:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.328 [2024-10-17 20:15:19.808671] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:34.328 20:15:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.328 20:15:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:34.328 20:15:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:34.328 20:15:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:34.328 20:15:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:34.328 20:15:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:34.328 20:15:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:34.328 20:15:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:34.328 20:15:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:34.328 20:15:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:34.328 20:15:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:34.328 20:15:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.328 20:15:19 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:34.328 20:15:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.328 20:15:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.328 20:15:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.328 20:15:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:34.328 "name": "Existed_Raid", 00:18:34.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.328 "strip_size_kb": 64, 00:18:34.328 "state": "configuring", 00:18:34.328 "raid_level": "raid5f", 00:18:34.328 "superblock": false, 00:18:34.328 "num_base_bdevs": 4, 00:18:34.328 "num_base_bdevs_discovered": 2, 00:18:34.328 "num_base_bdevs_operational": 4, 00:18:34.328 "base_bdevs_list": [ 00:18:34.328 { 00:18:34.328 "name": "BaseBdev1", 00:18:34.328 "uuid": "1fbb866d-cb6b-4eee-90f2-6d4ff5b80838", 00:18:34.328 "is_configured": true, 00:18:34.328 "data_offset": 0, 00:18:34.328 "data_size": 65536 00:18:34.328 }, 00:18:34.328 { 00:18:34.328 "name": null, 00:18:34.328 "uuid": "a9ea5370-27bd-4d99-925c-6b913611d19a", 00:18:34.328 "is_configured": false, 00:18:34.328 "data_offset": 0, 00:18:34.328 "data_size": 65536 00:18:34.328 }, 00:18:34.328 { 00:18:34.328 "name": null, 00:18:34.328 "uuid": "e28d2b16-6a29-4579-9a36-9b2f21abf5e2", 00:18:34.328 "is_configured": false, 00:18:34.328 "data_offset": 0, 00:18:34.328 "data_size": 65536 00:18:34.328 }, 00:18:34.328 { 00:18:34.328 "name": "BaseBdev4", 00:18:34.328 "uuid": "44f6a829-0342-4f98-8b6e-e46544537874", 00:18:34.328 "is_configured": true, 00:18:34.328 "data_offset": 0, 00:18:34.328 "data_size": 65536 00:18:34.328 } 00:18:34.328 ] 00:18:34.328 }' 00:18:34.328 20:15:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:34.328 20:15:19 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.894 20:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.894 20:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:34.894 20:15:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.894 20:15:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.894 20:15:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.894 20:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:18:34.894 20:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:34.894 20:15:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.894 20:15:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.894 [2024-10-17 20:15:20.396852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:34.894 20:15:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.894 20:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:34.894 20:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:34.894 20:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:34.894 20:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:34.894 20:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:34.894 
20:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:34.894 20:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:34.894 20:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:34.894 20:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:34.894 20:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:34.895 20:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.895 20:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:34.895 20:15:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.895 20:15:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.895 20:15:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.895 20:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:34.895 "name": "Existed_Raid", 00:18:34.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.895 "strip_size_kb": 64, 00:18:34.895 "state": "configuring", 00:18:34.895 "raid_level": "raid5f", 00:18:34.895 "superblock": false, 00:18:34.895 "num_base_bdevs": 4, 00:18:34.895 "num_base_bdevs_discovered": 3, 00:18:34.895 "num_base_bdevs_operational": 4, 00:18:34.895 "base_bdevs_list": [ 00:18:34.895 { 00:18:34.895 "name": "BaseBdev1", 00:18:34.895 "uuid": "1fbb866d-cb6b-4eee-90f2-6d4ff5b80838", 00:18:34.895 "is_configured": true, 00:18:34.895 "data_offset": 0, 00:18:34.895 "data_size": 65536 00:18:34.895 }, 00:18:34.895 { 00:18:34.895 "name": null, 00:18:34.895 "uuid": "a9ea5370-27bd-4d99-925c-6b913611d19a", 00:18:34.895 "is_configured": 
false, 00:18:34.895 "data_offset": 0, 00:18:34.895 "data_size": 65536 00:18:34.895 }, 00:18:34.895 { 00:18:34.895 "name": "BaseBdev3", 00:18:34.895 "uuid": "e28d2b16-6a29-4579-9a36-9b2f21abf5e2", 00:18:34.895 "is_configured": true, 00:18:34.895 "data_offset": 0, 00:18:34.895 "data_size": 65536 00:18:34.895 }, 00:18:34.895 { 00:18:34.895 "name": "BaseBdev4", 00:18:34.895 "uuid": "44f6a829-0342-4f98-8b6e-e46544537874", 00:18:34.895 "is_configured": true, 00:18:34.895 "data_offset": 0, 00:18:34.895 "data_size": 65536 00:18:34.895 } 00:18:34.895 ] 00:18:34.895 }' 00:18:34.895 20:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:34.895 20:15:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.461 20:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.461 20:15:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.461 20:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:35.462 20:15:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.462 20:15:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.462 20:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:18:35.462 20:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:35.462 20:15:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.462 20:15:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.462 [2024-10-17 20:15:20.965087] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:35.462 20:15:21 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.462 20:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:35.462 20:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:35.462 20:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:35.462 20:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:35.462 20:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:35.462 20:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:35.462 20:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:35.462 20:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:35.462 20:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:35.462 20:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:35.462 20:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.462 20:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:35.462 20:15:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.462 20:15:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.462 20:15:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.462 20:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:35.462 "name": "Existed_Raid", 00:18:35.462 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:35.462 "strip_size_kb": 64, 00:18:35.462 "state": "configuring", 00:18:35.462 "raid_level": "raid5f", 00:18:35.462 "superblock": false, 00:18:35.462 "num_base_bdevs": 4, 00:18:35.462 "num_base_bdevs_discovered": 2, 00:18:35.462 "num_base_bdevs_operational": 4, 00:18:35.462 "base_bdevs_list": [ 00:18:35.462 { 00:18:35.462 "name": null, 00:18:35.462 "uuid": "1fbb866d-cb6b-4eee-90f2-6d4ff5b80838", 00:18:35.462 "is_configured": false, 00:18:35.462 "data_offset": 0, 00:18:35.462 "data_size": 65536 00:18:35.462 }, 00:18:35.462 { 00:18:35.462 "name": null, 00:18:35.462 "uuid": "a9ea5370-27bd-4d99-925c-6b913611d19a", 00:18:35.462 "is_configured": false, 00:18:35.462 "data_offset": 0, 00:18:35.462 "data_size": 65536 00:18:35.462 }, 00:18:35.462 { 00:18:35.462 "name": "BaseBdev3", 00:18:35.462 "uuid": "e28d2b16-6a29-4579-9a36-9b2f21abf5e2", 00:18:35.462 "is_configured": true, 00:18:35.462 "data_offset": 0, 00:18:35.462 "data_size": 65536 00:18:35.462 }, 00:18:35.462 { 00:18:35.462 "name": "BaseBdev4", 00:18:35.462 "uuid": "44f6a829-0342-4f98-8b6e-e46544537874", 00:18:35.462 "is_configured": true, 00:18:35.462 "data_offset": 0, 00:18:35.462 "data_size": 65536 00:18:35.462 } 00:18:35.462 ] 00:18:35.462 }' 00:18:35.462 20:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:35.462 20:15:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.029 20:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:36.029 20:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.029 20:15:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.029 20:15:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.029 20:15:21 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.029 20:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:18:36.029 20:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:36.029 20:15:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.029 20:15:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.029 [2024-10-17 20:15:21.626174] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:36.029 20:15:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.029 20:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:36.029 20:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:36.029 20:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:36.029 20:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:36.029 20:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:36.029 20:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:36.029 20:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:36.029 20:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:36.029 20:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:36.029 20:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:36.029 20:15:21 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:36.029 20:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.029 20:15:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.029 20:15:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.029 20:15:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.287 20:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:36.287 "name": "Existed_Raid", 00:18:36.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:36.287 "strip_size_kb": 64, 00:18:36.287 "state": "configuring", 00:18:36.287 "raid_level": "raid5f", 00:18:36.287 "superblock": false, 00:18:36.287 "num_base_bdevs": 4, 00:18:36.287 "num_base_bdevs_discovered": 3, 00:18:36.287 "num_base_bdevs_operational": 4, 00:18:36.287 "base_bdevs_list": [ 00:18:36.287 { 00:18:36.287 "name": null, 00:18:36.287 "uuid": "1fbb866d-cb6b-4eee-90f2-6d4ff5b80838", 00:18:36.287 "is_configured": false, 00:18:36.287 "data_offset": 0, 00:18:36.287 "data_size": 65536 00:18:36.287 }, 00:18:36.287 { 00:18:36.287 "name": "BaseBdev2", 00:18:36.287 "uuid": "a9ea5370-27bd-4d99-925c-6b913611d19a", 00:18:36.287 "is_configured": true, 00:18:36.287 "data_offset": 0, 00:18:36.287 "data_size": 65536 00:18:36.287 }, 00:18:36.287 { 00:18:36.287 "name": "BaseBdev3", 00:18:36.287 "uuid": "e28d2b16-6a29-4579-9a36-9b2f21abf5e2", 00:18:36.287 "is_configured": true, 00:18:36.287 "data_offset": 0, 00:18:36.287 "data_size": 65536 00:18:36.287 }, 00:18:36.287 { 00:18:36.287 "name": "BaseBdev4", 00:18:36.287 "uuid": "44f6a829-0342-4f98-8b6e-e46544537874", 00:18:36.287 "is_configured": true, 00:18:36.287 "data_offset": 0, 00:18:36.287 "data_size": 65536 00:18:36.287 } 00:18:36.287 ] 00:18:36.287 }' 00:18:36.287 20:15:21 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:36.287 20:15:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.545 20:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.545 20:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:36.545 20:15:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.545 20:15:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.545 20:15:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.804 20:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:18:36.804 20:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.804 20:15:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.804 20:15:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.804 20:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:36.804 20:15:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.804 20:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 1fbb866d-cb6b-4eee-90f2-6d4ff5b80838 00:18:36.804 20:15:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.804 20:15:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.804 [2024-10-17 20:15:22.297531] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:36.804 [2024-10-17 
20:15:22.297609] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:36.804 [2024-10-17 20:15:22.297621] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:18:36.804 [2024-10-17 20:15:22.297902] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:18:36.804 [2024-10-17 20:15:22.303765] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:36.804 [2024-10-17 20:15:22.303793] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:18:36.804 [2024-10-17 20:15:22.304140] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:36.804 NewBaseBdev 00:18:36.804 20:15:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.804 20:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:18:36.804 20:15:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:18:36.804 20:15:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:36.804 20:15:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:36.804 20:15:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:36.804 20:15:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:36.804 20:15:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:36.804 20:15:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.804 20:15:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.804 20:15:22 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.804 20:15:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:36.804 20:15:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.804 20:15:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.804 [ 00:18:36.804 { 00:18:36.804 "name": "NewBaseBdev", 00:18:36.804 "aliases": [ 00:18:36.804 "1fbb866d-cb6b-4eee-90f2-6d4ff5b80838" 00:18:36.804 ], 00:18:36.804 "product_name": "Malloc disk", 00:18:36.804 "block_size": 512, 00:18:36.804 "num_blocks": 65536, 00:18:36.804 "uuid": "1fbb866d-cb6b-4eee-90f2-6d4ff5b80838", 00:18:36.804 "assigned_rate_limits": { 00:18:36.804 "rw_ios_per_sec": 0, 00:18:36.804 "rw_mbytes_per_sec": 0, 00:18:36.804 "r_mbytes_per_sec": 0, 00:18:36.804 "w_mbytes_per_sec": 0 00:18:36.804 }, 00:18:36.804 "claimed": true, 00:18:36.804 "claim_type": "exclusive_write", 00:18:36.804 "zoned": false, 00:18:36.804 "supported_io_types": { 00:18:36.804 "read": true, 00:18:36.804 "write": true, 00:18:36.804 "unmap": true, 00:18:36.804 "flush": true, 00:18:36.804 "reset": true, 00:18:36.804 "nvme_admin": false, 00:18:36.804 "nvme_io": false, 00:18:36.804 "nvme_io_md": false, 00:18:36.804 "write_zeroes": true, 00:18:36.804 "zcopy": true, 00:18:36.804 "get_zone_info": false, 00:18:36.804 "zone_management": false, 00:18:36.804 "zone_append": false, 00:18:36.804 "compare": false, 00:18:36.804 "compare_and_write": false, 00:18:36.804 "abort": true, 00:18:36.804 "seek_hole": false, 00:18:36.804 "seek_data": false, 00:18:36.804 "copy": true, 00:18:36.804 "nvme_iov_md": false 00:18:36.804 }, 00:18:36.805 "memory_domains": [ 00:18:36.805 { 00:18:36.805 "dma_device_id": "system", 00:18:36.805 "dma_device_type": 1 00:18:36.805 }, 00:18:36.805 { 00:18:36.805 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:36.805 "dma_device_type": 2 00:18:36.805 } 
00:18:36.805 ], 00:18:36.805 "driver_specific": {} 00:18:36.805 } 00:18:36.805 ] 00:18:36.805 20:15:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.805 20:15:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:36.805 20:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:18:36.805 20:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:36.805 20:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:36.805 20:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:36.805 20:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:36.805 20:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:36.805 20:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:36.805 20:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:36.805 20:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:36.805 20:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:36.805 20:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.805 20:15:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.805 20:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:36.805 20:15:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.805 20:15:22 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.805 20:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:36.805 "name": "Existed_Raid", 00:18:36.805 "uuid": "86d05812-6757-4cd6-ba48-197985f58bd1", 00:18:36.805 "strip_size_kb": 64, 00:18:36.805 "state": "online", 00:18:36.805 "raid_level": "raid5f", 00:18:36.805 "superblock": false, 00:18:36.805 "num_base_bdevs": 4, 00:18:36.805 "num_base_bdevs_discovered": 4, 00:18:36.805 "num_base_bdevs_operational": 4, 00:18:36.805 "base_bdevs_list": [ 00:18:36.805 { 00:18:36.805 "name": "NewBaseBdev", 00:18:36.805 "uuid": "1fbb866d-cb6b-4eee-90f2-6d4ff5b80838", 00:18:36.805 "is_configured": true, 00:18:36.805 "data_offset": 0, 00:18:36.805 "data_size": 65536 00:18:36.805 }, 00:18:36.805 { 00:18:36.805 "name": "BaseBdev2", 00:18:36.805 "uuid": "a9ea5370-27bd-4d99-925c-6b913611d19a", 00:18:36.805 "is_configured": true, 00:18:36.805 "data_offset": 0, 00:18:36.805 "data_size": 65536 00:18:36.805 }, 00:18:36.805 { 00:18:36.805 "name": "BaseBdev3", 00:18:36.805 "uuid": "e28d2b16-6a29-4579-9a36-9b2f21abf5e2", 00:18:36.805 "is_configured": true, 00:18:36.805 "data_offset": 0, 00:18:36.805 "data_size": 65536 00:18:36.805 }, 00:18:36.805 { 00:18:36.805 "name": "BaseBdev4", 00:18:36.805 "uuid": "44f6a829-0342-4f98-8b6e-e46544537874", 00:18:36.805 "is_configured": true, 00:18:36.805 "data_offset": 0, 00:18:36.805 "data_size": 65536 00:18:36.805 } 00:18:36.805 ] 00:18:36.805 }' 00:18:36.805 20:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:36.805 20:15:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.371 20:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:18:37.371 20:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:37.371 20:15:22 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:37.371 20:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:37.371 20:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:37.371 20:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:37.371 20:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:37.371 20:15:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.371 20:15:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.371 20:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:37.371 [2024-10-17 20:15:22.875460] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:37.371 20:15:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.371 20:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:37.371 "name": "Existed_Raid", 00:18:37.371 "aliases": [ 00:18:37.371 "86d05812-6757-4cd6-ba48-197985f58bd1" 00:18:37.371 ], 00:18:37.371 "product_name": "Raid Volume", 00:18:37.371 "block_size": 512, 00:18:37.371 "num_blocks": 196608, 00:18:37.371 "uuid": "86d05812-6757-4cd6-ba48-197985f58bd1", 00:18:37.371 "assigned_rate_limits": { 00:18:37.371 "rw_ios_per_sec": 0, 00:18:37.371 "rw_mbytes_per_sec": 0, 00:18:37.372 "r_mbytes_per_sec": 0, 00:18:37.372 "w_mbytes_per_sec": 0 00:18:37.372 }, 00:18:37.372 "claimed": false, 00:18:37.372 "zoned": false, 00:18:37.372 "supported_io_types": { 00:18:37.372 "read": true, 00:18:37.372 "write": true, 00:18:37.372 "unmap": false, 00:18:37.372 "flush": false, 00:18:37.372 "reset": true, 00:18:37.372 "nvme_admin": false, 00:18:37.372 "nvme_io": false, 00:18:37.372 "nvme_io_md": 
false, 00:18:37.372 "write_zeroes": true, 00:18:37.372 "zcopy": false, 00:18:37.372 "get_zone_info": false, 00:18:37.372 "zone_management": false, 00:18:37.372 "zone_append": false, 00:18:37.372 "compare": false, 00:18:37.372 "compare_and_write": false, 00:18:37.372 "abort": false, 00:18:37.372 "seek_hole": false, 00:18:37.372 "seek_data": false, 00:18:37.372 "copy": false, 00:18:37.372 "nvme_iov_md": false 00:18:37.372 }, 00:18:37.372 "driver_specific": { 00:18:37.372 "raid": { 00:18:37.372 "uuid": "86d05812-6757-4cd6-ba48-197985f58bd1", 00:18:37.372 "strip_size_kb": 64, 00:18:37.372 "state": "online", 00:18:37.372 "raid_level": "raid5f", 00:18:37.372 "superblock": false, 00:18:37.372 "num_base_bdevs": 4, 00:18:37.372 "num_base_bdevs_discovered": 4, 00:18:37.372 "num_base_bdevs_operational": 4, 00:18:37.372 "base_bdevs_list": [ 00:18:37.372 { 00:18:37.372 "name": "NewBaseBdev", 00:18:37.372 "uuid": "1fbb866d-cb6b-4eee-90f2-6d4ff5b80838", 00:18:37.372 "is_configured": true, 00:18:37.372 "data_offset": 0, 00:18:37.372 "data_size": 65536 00:18:37.372 }, 00:18:37.372 { 00:18:37.372 "name": "BaseBdev2", 00:18:37.372 "uuid": "a9ea5370-27bd-4d99-925c-6b913611d19a", 00:18:37.372 "is_configured": true, 00:18:37.372 "data_offset": 0, 00:18:37.372 "data_size": 65536 00:18:37.372 }, 00:18:37.372 { 00:18:37.372 "name": "BaseBdev3", 00:18:37.372 "uuid": "e28d2b16-6a29-4579-9a36-9b2f21abf5e2", 00:18:37.372 "is_configured": true, 00:18:37.372 "data_offset": 0, 00:18:37.372 "data_size": 65536 00:18:37.372 }, 00:18:37.372 { 00:18:37.372 "name": "BaseBdev4", 00:18:37.372 "uuid": "44f6a829-0342-4f98-8b6e-e46544537874", 00:18:37.372 "is_configured": true, 00:18:37.372 "data_offset": 0, 00:18:37.372 "data_size": 65536 00:18:37.372 } 00:18:37.372 ] 00:18:37.372 } 00:18:37.372 } 00:18:37.372 }' 00:18:37.372 20:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:37.372 20:15:22 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:18:37.372 BaseBdev2 00:18:37.372 BaseBdev3 00:18:37.372 BaseBdev4' 00:18:37.372 20:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:37.630 20:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:37.630 20:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:37.630 20:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:18:37.630 20:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:37.630 20:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.630 20:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.630 20:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.630 20:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:37.630 20:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:37.630 20:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:37.630 20:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:37.630 20:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.630 20:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.630 20:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:18:37.630 20:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.630 20:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:37.630 20:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:37.630 20:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:37.630 20:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:37.630 20:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.630 20:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.630 20:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:37.630 20:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.630 20:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:37.631 20:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:37.631 20:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:37.631 20:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:18:37.631 20:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.631 20:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.631 20:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:37.631 20:15:23 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.631 20:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:37.631 20:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:37.631 20:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:37.631 20:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.631 20:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.631 [2024-10-17 20:15:23.259287] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:37.631 [2024-10-17 20:15:23.259326] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:37.631 [2024-10-17 20:15:23.259438] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:37.631 [2024-10-17 20:15:23.259828] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:37.631 [2024-10-17 20:15:23.259847] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:18:37.631 20:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.631 20:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 83059 00:18:37.631 20:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 83059 ']' 00:18:37.631 20:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # kill -0 83059 00:18:37.631 20:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:18:37.631 20:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:37.631 20:15:23 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83059 00:18:37.889 killing process with pid 83059 00:18:37.889 20:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:37.889 20:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:37.889 20:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83059' 00:18:37.889 20:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 83059 00:18:37.889 [2024-10-17 20:15:23.297860] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:37.889 20:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 83059 00:18:38.150 [2024-10-17 20:15:23.636025] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:39.084 20:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:18:39.084 00:18:39.084 real 0m12.989s 00:18:39.084 user 0m21.600s 00:18:39.084 sys 0m1.908s 00:18:39.084 20:15:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:39.084 20:15:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.084 ************************************ 00:18:39.084 END TEST raid5f_state_function_test 00:18:39.084 ************************************ 00:18:39.084 20:15:24 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:18:39.084 20:15:24 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:39.084 20:15:24 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:39.084 20:15:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:39.084 ************************************ 00:18:39.084 START TEST 
raid5f_state_function_test_sb 00:18:39.084 ************************************ 00:18:39.084 20:15:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 true 00:18:39.084 20:15:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:18:39.084 20:15:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:18:39.084 20:15:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:39.084 20:15:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:39.084 20:15:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:39.084 20:15:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:39.084 20:15:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:39.084 20:15:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:39.084 20:15:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:39.084 20:15:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:39.084 20:15:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:39.084 20:15:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:39.084 20:15:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:18:39.084 20:15:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:39.084 20:15:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:39.085 20:15:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:18:39.085 
20:15:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:39.085 20:15:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:39.085 20:15:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:39.085 20:15:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:39.085 20:15:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:39.085 20:15:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:39.085 20:15:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:39.085 20:15:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:39.085 20:15:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:18:39.085 20:15:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:18:39.085 20:15:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:18:39.085 20:15:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:39.085 20:15:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:39.085 Process raid pid: 83737 00:18:39.085 20:15:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83737 00:18:39.085 20:15:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83737' 00:18:39.085 20:15:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83737 00:18:39.085 20:15:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:39.085 20:15:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 83737 ']' 00:18:39.085 20:15:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:39.085 20:15:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:39.085 20:15:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:39.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:39.085 20:15:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:39.085 20:15:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:39.343 [2024-10-17 20:15:24.767922] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 
00:18:39.343 [2024-10-17 20:15:24.768448] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:39.343 [2024-10-17 20:15:24.949933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:39.601 [2024-10-17 20:15:25.068051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.860 [2024-10-17 20:15:25.258057] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:39.860 [2024-10-17 20:15:25.258112] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:40.118 20:15:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:40.118 20:15:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:18:40.118 20:15:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:40.118 20:15:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.118 20:15:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:40.118 [2024-10-17 20:15:25.746401] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:40.118 [2024-10-17 20:15:25.746493] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:40.118 [2024-10-17 20:15:25.746509] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:40.118 [2024-10-17 20:15:25.746525] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:40.118 [2024-10-17 20:15:25.746535] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:18:40.118 [2024-10-17 20:15:25.746549] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:40.118 [2024-10-17 20:15:25.746566] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:40.118 [2024-10-17 20:15:25.746580] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:40.118 20:15:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.118 20:15:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:40.118 20:15:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:40.118 20:15:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:40.118 20:15:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:40.118 20:15:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:40.118 20:15:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:40.118 20:15:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:40.118 20:15:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:40.118 20:15:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:40.118 20:15:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:40.118 20:15:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.118 20:15:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:18:40.118 20:15:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.118 20:15:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:40.376 20:15:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.376 20:15:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:40.376 "name": "Existed_Raid", 00:18:40.376 "uuid": "042e1498-32bf-4761-97df-5680b26f5f2e", 00:18:40.376 "strip_size_kb": 64, 00:18:40.376 "state": "configuring", 00:18:40.376 "raid_level": "raid5f", 00:18:40.376 "superblock": true, 00:18:40.376 "num_base_bdevs": 4, 00:18:40.376 "num_base_bdevs_discovered": 0, 00:18:40.376 "num_base_bdevs_operational": 4, 00:18:40.376 "base_bdevs_list": [ 00:18:40.376 { 00:18:40.376 "name": "BaseBdev1", 00:18:40.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.376 "is_configured": false, 00:18:40.376 "data_offset": 0, 00:18:40.376 "data_size": 0 00:18:40.376 }, 00:18:40.376 { 00:18:40.376 "name": "BaseBdev2", 00:18:40.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.376 "is_configured": false, 00:18:40.376 "data_offset": 0, 00:18:40.376 "data_size": 0 00:18:40.376 }, 00:18:40.376 { 00:18:40.376 "name": "BaseBdev3", 00:18:40.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.376 "is_configured": false, 00:18:40.376 "data_offset": 0, 00:18:40.376 "data_size": 0 00:18:40.376 }, 00:18:40.376 { 00:18:40.376 "name": "BaseBdev4", 00:18:40.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.376 "is_configured": false, 00:18:40.376 "data_offset": 0, 00:18:40.376 "data_size": 0 00:18:40.376 } 00:18:40.376 ] 00:18:40.376 }' 00:18:40.376 20:15:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:40.376 20:15:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:18:40.634 20:15:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:40.634 20:15:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.634 20:15:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:40.634 [2024-10-17 20:15:26.270417] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:40.634 [2024-10-17 20:15:26.270462] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:40.634 20:15:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.634 20:15:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:40.634 20:15:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.634 20:15:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:40.634 [2024-10-17 20:15:26.282496] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:40.634 [2024-10-17 20:15:26.282570] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:40.634 [2024-10-17 20:15:26.282585] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:40.634 [2024-10-17 20:15:26.282604] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:40.634 [2024-10-17 20:15:26.282614] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:40.634 [2024-10-17 20:15:26.282627] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:40.634 [2024-10-17 20:15:26.282636] 
bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:40.634 [2024-10-17 20:15:26.282650] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:40.892 20:15:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.892 20:15:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:40.892 20:15:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.892 20:15:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:40.892 [2024-10-17 20:15:26.323737] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:40.892 BaseBdev1 00:18:40.892 20:15:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.892 20:15:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:40.892 20:15:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:18:40.892 20:15:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:40.892 20:15:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:40.892 20:15:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:40.892 20:15:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:40.892 20:15:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:40.892 20:15:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.892 20:15:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:18:40.892 20:15:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.892 20:15:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:40.892 20:15:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.892 20:15:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:40.892 [ 00:18:40.892 { 00:18:40.892 "name": "BaseBdev1", 00:18:40.892 "aliases": [ 00:18:40.892 "80e73431-1627-470f-8dbf-53b5203a69fd" 00:18:40.892 ], 00:18:40.892 "product_name": "Malloc disk", 00:18:40.892 "block_size": 512, 00:18:40.892 "num_blocks": 65536, 00:18:40.892 "uuid": "80e73431-1627-470f-8dbf-53b5203a69fd", 00:18:40.892 "assigned_rate_limits": { 00:18:40.892 "rw_ios_per_sec": 0, 00:18:40.892 "rw_mbytes_per_sec": 0, 00:18:40.892 "r_mbytes_per_sec": 0, 00:18:40.892 "w_mbytes_per_sec": 0 00:18:40.892 }, 00:18:40.892 "claimed": true, 00:18:40.892 "claim_type": "exclusive_write", 00:18:40.892 "zoned": false, 00:18:40.892 "supported_io_types": { 00:18:40.892 "read": true, 00:18:40.892 "write": true, 00:18:40.892 "unmap": true, 00:18:40.892 "flush": true, 00:18:40.892 "reset": true, 00:18:40.892 "nvme_admin": false, 00:18:40.892 "nvme_io": false, 00:18:40.892 "nvme_io_md": false, 00:18:40.892 "write_zeroes": true, 00:18:40.892 "zcopy": true, 00:18:40.892 "get_zone_info": false, 00:18:40.892 "zone_management": false, 00:18:40.892 "zone_append": false, 00:18:40.892 "compare": false, 00:18:40.892 "compare_and_write": false, 00:18:40.892 "abort": true, 00:18:40.892 "seek_hole": false, 00:18:40.892 "seek_data": false, 00:18:40.892 "copy": true, 00:18:40.892 "nvme_iov_md": false 00:18:40.892 }, 00:18:40.892 "memory_domains": [ 00:18:40.892 { 00:18:40.892 "dma_device_id": "system", 00:18:40.892 "dma_device_type": 1 00:18:40.892 }, 00:18:40.892 { 00:18:40.892 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:18:40.892 "dma_device_type": 2 00:18:40.892 } 00:18:40.892 ], 00:18:40.892 "driver_specific": {} 00:18:40.892 } 00:18:40.892 ] 00:18:40.892 20:15:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.892 20:15:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:40.892 20:15:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:40.892 20:15:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:40.892 20:15:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:40.892 20:15:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:40.892 20:15:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:40.892 20:15:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:40.892 20:15:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:40.892 20:15:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:40.892 20:15:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:40.892 20:15:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:40.892 20:15:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.892 20:15:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.892 20:15:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:40.892 20:15:26 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:40.892 20:15:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.892 20:15:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:40.892 "name": "Existed_Raid", 00:18:40.892 "uuid": "5257239a-2892-4c5f-bd07-abcccff90115", 00:18:40.892 "strip_size_kb": 64, 00:18:40.892 "state": "configuring", 00:18:40.892 "raid_level": "raid5f", 00:18:40.892 "superblock": true, 00:18:40.892 "num_base_bdevs": 4, 00:18:40.892 "num_base_bdevs_discovered": 1, 00:18:40.892 "num_base_bdevs_operational": 4, 00:18:40.892 "base_bdevs_list": [ 00:18:40.892 { 00:18:40.892 "name": "BaseBdev1", 00:18:40.892 "uuid": "80e73431-1627-470f-8dbf-53b5203a69fd", 00:18:40.892 "is_configured": true, 00:18:40.892 "data_offset": 2048, 00:18:40.892 "data_size": 63488 00:18:40.892 }, 00:18:40.892 { 00:18:40.892 "name": "BaseBdev2", 00:18:40.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.892 "is_configured": false, 00:18:40.892 "data_offset": 0, 00:18:40.892 "data_size": 0 00:18:40.892 }, 00:18:40.892 { 00:18:40.892 "name": "BaseBdev3", 00:18:40.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.892 "is_configured": false, 00:18:40.892 "data_offset": 0, 00:18:40.892 "data_size": 0 00:18:40.892 }, 00:18:40.892 { 00:18:40.892 "name": "BaseBdev4", 00:18:40.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.892 "is_configured": false, 00:18:40.892 "data_offset": 0, 00:18:40.892 "data_size": 0 00:18:40.892 } 00:18:40.892 ] 00:18:40.892 }' 00:18:40.892 20:15:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:40.892 20:15:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:41.476 20:15:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:41.476 20:15:26 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.476 20:15:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:41.476 [2024-10-17 20:15:26.899953] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:41.476 [2024-10-17 20:15:26.900231] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:41.476 20:15:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.476 20:15:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:41.476 20:15:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.476 20:15:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:41.476 [2024-10-17 20:15:26.908046] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:41.476 [2024-10-17 20:15:26.910574] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:41.476 [2024-10-17 20:15:26.910641] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:41.476 [2024-10-17 20:15:26.910656] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:41.476 [2024-10-17 20:15:26.910673] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:41.476 [2024-10-17 20:15:26.910683] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:41.476 [2024-10-17 20:15:26.910696] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:41.476 20:15:26 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.476 20:15:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:41.476 20:15:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:41.476 20:15:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:41.476 20:15:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:41.476 20:15:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:41.476 20:15:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:41.476 20:15:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:41.476 20:15:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:41.476 20:15:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:41.476 20:15:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:41.476 20:15:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:41.476 20:15:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:41.476 20:15:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.476 20:15:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.476 20:15:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:41.476 20:15:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:41.476 20:15:26 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.476 20:15:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:41.476 "name": "Existed_Raid", 00:18:41.476 "uuid": "785f69b9-efc2-477a-bf1f-cbcb9c50967e", 00:18:41.476 "strip_size_kb": 64, 00:18:41.476 "state": "configuring", 00:18:41.476 "raid_level": "raid5f", 00:18:41.476 "superblock": true, 00:18:41.476 "num_base_bdevs": 4, 00:18:41.476 "num_base_bdevs_discovered": 1, 00:18:41.476 "num_base_bdevs_operational": 4, 00:18:41.476 "base_bdevs_list": [ 00:18:41.476 { 00:18:41.476 "name": "BaseBdev1", 00:18:41.476 "uuid": "80e73431-1627-470f-8dbf-53b5203a69fd", 00:18:41.476 "is_configured": true, 00:18:41.476 "data_offset": 2048, 00:18:41.476 "data_size": 63488 00:18:41.476 }, 00:18:41.476 { 00:18:41.476 "name": "BaseBdev2", 00:18:41.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.476 "is_configured": false, 00:18:41.476 "data_offset": 0, 00:18:41.476 "data_size": 0 00:18:41.476 }, 00:18:41.476 { 00:18:41.476 "name": "BaseBdev3", 00:18:41.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.476 "is_configured": false, 00:18:41.476 "data_offset": 0, 00:18:41.476 "data_size": 0 00:18:41.476 }, 00:18:41.476 { 00:18:41.476 "name": "BaseBdev4", 00:18:41.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.476 "is_configured": false, 00:18:41.476 "data_offset": 0, 00:18:41.476 "data_size": 0 00:18:41.476 } 00:18:41.476 ] 00:18:41.476 }' 00:18:41.476 20:15:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:41.476 20:15:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:42.050 20:15:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:42.050 20:15:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:18:42.050 20:15:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:42.050 [2024-10-17 20:15:27.478439] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:42.050 BaseBdev2 00:18:42.050 20:15:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.050 20:15:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:42.050 20:15:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:18:42.050 20:15:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:42.050 20:15:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:42.050 20:15:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:42.050 20:15:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:42.050 20:15:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:42.050 20:15:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.050 20:15:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:42.050 20:15:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.050 20:15:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:42.051 20:15:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.051 20:15:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:42.051 [ 00:18:42.051 { 00:18:42.051 "name": "BaseBdev2", 00:18:42.051 "aliases": [ 00:18:42.051 
"11478bcf-7fb1-4295-8a67-b0f25c440103" 00:18:42.051 ], 00:18:42.051 "product_name": "Malloc disk", 00:18:42.051 "block_size": 512, 00:18:42.051 "num_blocks": 65536, 00:18:42.051 "uuid": "11478bcf-7fb1-4295-8a67-b0f25c440103", 00:18:42.051 "assigned_rate_limits": { 00:18:42.051 "rw_ios_per_sec": 0, 00:18:42.051 "rw_mbytes_per_sec": 0, 00:18:42.051 "r_mbytes_per_sec": 0, 00:18:42.051 "w_mbytes_per_sec": 0 00:18:42.051 }, 00:18:42.051 "claimed": true, 00:18:42.051 "claim_type": "exclusive_write", 00:18:42.051 "zoned": false, 00:18:42.051 "supported_io_types": { 00:18:42.051 "read": true, 00:18:42.051 "write": true, 00:18:42.051 "unmap": true, 00:18:42.051 "flush": true, 00:18:42.051 "reset": true, 00:18:42.051 "nvme_admin": false, 00:18:42.051 "nvme_io": false, 00:18:42.051 "nvme_io_md": false, 00:18:42.051 "write_zeroes": true, 00:18:42.051 "zcopy": true, 00:18:42.051 "get_zone_info": false, 00:18:42.051 "zone_management": false, 00:18:42.051 "zone_append": false, 00:18:42.051 "compare": false, 00:18:42.051 "compare_and_write": false, 00:18:42.051 "abort": true, 00:18:42.051 "seek_hole": false, 00:18:42.051 "seek_data": false, 00:18:42.051 "copy": true, 00:18:42.051 "nvme_iov_md": false 00:18:42.051 }, 00:18:42.051 "memory_domains": [ 00:18:42.051 { 00:18:42.051 "dma_device_id": "system", 00:18:42.051 "dma_device_type": 1 00:18:42.051 }, 00:18:42.051 { 00:18:42.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:42.051 "dma_device_type": 2 00:18:42.051 } 00:18:42.051 ], 00:18:42.051 "driver_specific": {} 00:18:42.051 } 00:18:42.051 ] 00:18:42.051 20:15:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.051 20:15:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:42.051 20:15:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:42.051 20:15:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:18:42.051 20:15:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:42.051 20:15:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:42.051 20:15:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:42.051 20:15:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:42.051 20:15:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:42.051 20:15:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:42.051 20:15:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:42.051 20:15:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:42.051 20:15:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:42.051 20:15:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:42.051 20:15:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:42.051 20:15:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.051 20:15:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.051 20:15:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:42.051 20:15:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.051 20:15:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:42.051 "name": "Existed_Raid", 00:18:42.051 "uuid": 
"785f69b9-efc2-477a-bf1f-cbcb9c50967e", 00:18:42.051 "strip_size_kb": 64, 00:18:42.051 "state": "configuring", 00:18:42.051 "raid_level": "raid5f", 00:18:42.051 "superblock": true, 00:18:42.051 "num_base_bdevs": 4, 00:18:42.051 "num_base_bdevs_discovered": 2, 00:18:42.051 "num_base_bdevs_operational": 4, 00:18:42.051 "base_bdevs_list": [ 00:18:42.051 { 00:18:42.051 "name": "BaseBdev1", 00:18:42.051 "uuid": "80e73431-1627-470f-8dbf-53b5203a69fd", 00:18:42.051 "is_configured": true, 00:18:42.051 "data_offset": 2048, 00:18:42.051 "data_size": 63488 00:18:42.051 }, 00:18:42.051 { 00:18:42.051 "name": "BaseBdev2", 00:18:42.051 "uuid": "11478bcf-7fb1-4295-8a67-b0f25c440103", 00:18:42.051 "is_configured": true, 00:18:42.051 "data_offset": 2048, 00:18:42.051 "data_size": 63488 00:18:42.051 }, 00:18:42.051 { 00:18:42.051 "name": "BaseBdev3", 00:18:42.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:42.051 "is_configured": false, 00:18:42.051 "data_offset": 0, 00:18:42.051 "data_size": 0 00:18:42.051 }, 00:18:42.051 { 00:18:42.051 "name": "BaseBdev4", 00:18:42.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:42.051 "is_configured": false, 00:18:42.051 "data_offset": 0, 00:18:42.051 "data_size": 0 00:18:42.051 } 00:18:42.051 ] 00:18:42.051 }' 00:18:42.051 20:15:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:42.051 20:15:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:42.641 20:15:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:42.641 20:15:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.641 20:15:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:42.641 [2024-10-17 20:15:28.088918] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:42.641 BaseBdev3 
00:18:42.641 20:15:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.641 20:15:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:18:42.641 20:15:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:18:42.641 20:15:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:42.641 20:15:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:42.641 20:15:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:42.641 20:15:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:42.641 20:15:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:42.641 20:15:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.641 20:15:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:42.641 20:15:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.641 20:15:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:42.641 20:15:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.641 20:15:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:42.641 [ 00:18:42.641 { 00:18:42.641 "name": "BaseBdev3", 00:18:42.641 "aliases": [ 00:18:42.641 "cc7fedce-41ea-4ec5-93f1-9101a6f4c3e8" 00:18:42.641 ], 00:18:42.641 "product_name": "Malloc disk", 00:18:42.641 "block_size": 512, 00:18:42.641 "num_blocks": 65536, 00:18:42.641 "uuid": "cc7fedce-41ea-4ec5-93f1-9101a6f4c3e8", 00:18:42.641 
"assigned_rate_limits": { 00:18:42.641 "rw_ios_per_sec": 0, 00:18:42.641 "rw_mbytes_per_sec": 0, 00:18:42.641 "r_mbytes_per_sec": 0, 00:18:42.641 "w_mbytes_per_sec": 0 00:18:42.641 }, 00:18:42.641 "claimed": true, 00:18:42.641 "claim_type": "exclusive_write", 00:18:42.641 "zoned": false, 00:18:42.641 "supported_io_types": { 00:18:42.641 "read": true, 00:18:42.641 "write": true, 00:18:42.641 "unmap": true, 00:18:42.641 "flush": true, 00:18:42.641 "reset": true, 00:18:42.641 "nvme_admin": false, 00:18:42.641 "nvme_io": false, 00:18:42.641 "nvme_io_md": false, 00:18:42.641 "write_zeroes": true, 00:18:42.641 "zcopy": true, 00:18:42.641 "get_zone_info": false, 00:18:42.641 "zone_management": false, 00:18:42.641 "zone_append": false, 00:18:42.641 "compare": false, 00:18:42.641 "compare_and_write": false, 00:18:42.641 "abort": true, 00:18:42.641 "seek_hole": false, 00:18:42.641 "seek_data": false, 00:18:42.641 "copy": true, 00:18:42.641 "nvme_iov_md": false 00:18:42.641 }, 00:18:42.641 "memory_domains": [ 00:18:42.641 { 00:18:42.641 "dma_device_id": "system", 00:18:42.641 "dma_device_type": 1 00:18:42.641 }, 00:18:42.641 { 00:18:42.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:42.641 "dma_device_type": 2 00:18:42.641 } 00:18:42.641 ], 00:18:42.641 "driver_specific": {} 00:18:42.641 } 00:18:42.641 ] 00:18:42.641 20:15:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.641 20:15:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:42.641 20:15:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:42.641 20:15:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:42.641 20:15:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:42.641 20:15:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:18:42.641 20:15:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:42.641 20:15:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:42.641 20:15:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:42.641 20:15:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:42.641 20:15:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:42.641 20:15:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:42.641 20:15:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:42.641 20:15:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:42.641 20:15:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.641 20:15:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:42.641 20:15:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.641 20:15:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:42.641 20:15:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.641 20:15:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:42.641 "name": "Existed_Raid", 00:18:42.641 "uuid": "785f69b9-efc2-477a-bf1f-cbcb9c50967e", 00:18:42.641 "strip_size_kb": 64, 00:18:42.641 "state": "configuring", 00:18:42.641 "raid_level": "raid5f", 00:18:42.641 "superblock": true, 00:18:42.641 "num_base_bdevs": 4, 00:18:42.641 "num_base_bdevs_discovered": 3, 
00:18:42.641 "num_base_bdevs_operational": 4, 00:18:42.641 "base_bdevs_list": [ 00:18:42.641 { 00:18:42.641 "name": "BaseBdev1", 00:18:42.641 "uuid": "80e73431-1627-470f-8dbf-53b5203a69fd", 00:18:42.641 "is_configured": true, 00:18:42.641 "data_offset": 2048, 00:18:42.641 "data_size": 63488 00:18:42.641 }, 00:18:42.641 { 00:18:42.641 "name": "BaseBdev2", 00:18:42.641 "uuid": "11478bcf-7fb1-4295-8a67-b0f25c440103", 00:18:42.641 "is_configured": true, 00:18:42.641 "data_offset": 2048, 00:18:42.641 "data_size": 63488 00:18:42.641 }, 00:18:42.641 { 00:18:42.641 "name": "BaseBdev3", 00:18:42.641 "uuid": "cc7fedce-41ea-4ec5-93f1-9101a6f4c3e8", 00:18:42.641 "is_configured": true, 00:18:42.641 "data_offset": 2048, 00:18:42.641 "data_size": 63488 00:18:42.641 }, 00:18:42.641 { 00:18:42.641 "name": "BaseBdev4", 00:18:42.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:42.641 "is_configured": false, 00:18:42.641 "data_offset": 0, 00:18:42.641 "data_size": 0 00:18:42.641 } 00:18:42.641 ] 00:18:42.641 }' 00:18:42.641 20:15:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:42.641 20:15:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:43.207 20:15:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:18:43.207 20:15:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.207 20:15:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:43.207 [2024-10-17 20:15:28.682545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:43.207 [2024-10-17 20:15:28.682878] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:43.207 [2024-10-17 20:15:28.682896] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:43.207 BaseBdev4 
00:18:43.207 [2024-10-17 20:15:28.683292] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:43.207 20:15:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.207 20:15:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:18:43.207 20:15:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:18:43.207 20:15:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:43.207 20:15:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:43.207 20:15:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:43.207 20:15:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:43.207 20:15:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:43.207 20:15:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.207 20:15:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:43.207 [2024-10-17 20:15:28.689994] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:43.207 [2024-10-17 20:15:28.690051] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:43.207 [2024-10-17 20:15:28.690359] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:43.207 20:15:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.207 20:15:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:43.207 20:15:28 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.207 20:15:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:43.207 [ 00:18:43.207 { 00:18:43.207 "name": "BaseBdev4", 00:18:43.208 "aliases": [ 00:18:43.208 "a6140044-3cca-463b-97fc-28540e0bf7d2" 00:18:43.208 ], 00:18:43.208 "product_name": "Malloc disk", 00:18:43.208 "block_size": 512, 00:18:43.208 "num_blocks": 65536, 00:18:43.208 "uuid": "a6140044-3cca-463b-97fc-28540e0bf7d2", 00:18:43.208 "assigned_rate_limits": { 00:18:43.208 "rw_ios_per_sec": 0, 00:18:43.208 "rw_mbytes_per_sec": 0, 00:18:43.208 "r_mbytes_per_sec": 0, 00:18:43.208 "w_mbytes_per_sec": 0 00:18:43.208 }, 00:18:43.208 "claimed": true, 00:18:43.208 "claim_type": "exclusive_write", 00:18:43.208 "zoned": false, 00:18:43.208 "supported_io_types": { 00:18:43.208 "read": true, 00:18:43.208 "write": true, 00:18:43.208 "unmap": true, 00:18:43.208 "flush": true, 00:18:43.208 "reset": true, 00:18:43.208 "nvme_admin": false, 00:18:43.208 "nvme_io": false, 00:18:43.208 "nvme_io_md": false, 00:18:43.208 "write_zeroes": true, 00:18:43.208 "zcopy": true, 00:18:43.208 "get_zone_info": false, 00:18:43.208 "zone_management": false, 00:18:43.208 "zone_append": false, 00:18:43.208 "compare": false, 00:18:43.208 "compare_and_write": false, 00:18:43.208 "abort": true, 00:18:43.208 "seek_hole": false, 00:18:43.208 "seek_data": false, 00:18:43.208 "copy": true, 00:18:43.208 "nvme_iov_md": false 00:18:43.208 }, 00:18:43.208 "memory_domains": [ 00:18:43.208 { 00:18:43.208 "dma_device_id": "system", 00:18:43.208 "dma_device_type": 1 00:18:43.208 }, 00:18:43.208 { 00:18:43.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:43.208 "dma_device_type": 2 00:18:43.208 } 00:18:43.208 ], 00:18:43.208 "driver_specific": {} 00:18:43.208 } 00:18:43.208 ] 00:18:43.208 20:15:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.208 20:15:28 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:43.208 20:15:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:43.208 20:15:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:43.208 20:15:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:18:43.208 20:15:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:43.208 20:15:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:43.208 20:15:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:43.208 20:15:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:43.208 20:15:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:43.208 20:15:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:43.208 20:15:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:43.208 20:15:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:43.208 20:15:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:43.208 20:15:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.208 20:15:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:43.208 20:15:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.208 20:15:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:18:43.208 20:15:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.208 20:15:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:43.208 "name": "Existed_Raid", 00:18:43.208 "uuid": "785f69b9-efc2-477a-bf1f-cbcb9c50967e", 00:18:43.208 "strip_size_kb": 64, 00:18:43.208 "state": "online", 00:18:43.208 "raid_level": "raid5f", 00:18:43.208 "superblock": true, 00:18:43.208 "num_base_bdevs": 4, 00:18:43.208 "num_base_bdevs_discovered": 4, 00:18:43.208 "num_base_bdevs_operational": 4, 00:18:43.208 "base_bdevs_list": [ 00:18:43.208 { 00:18:43.208 "name": "BaseBdev1", 00:18:43.208 "uuid": "80e73431-1627-470f-8dbf-53b5203a69fd", 00:18:43.208 "is_configured": true, 00:18:43.208 "data_offset": 2048, 00:18:43.208 "data_size": 63488 00:18:43.208 }, 00:18:43.208 { 00:18:43.208 "name": "BaseBdev2", 00:18:43.208 "uuid": "11478bcf-7fb1-4295-8a67-b0f25c440103", 00:18:43.208 "is_configured": true, 00:18:43.208 "data_offset": 2048, 00:18:43.208 "data_size": 63488 00:18:43.208 }, 00:18:43.208 { 00:18:43.208 "name": "BaseBdev3", 00:18:43.208 "uuid": "cc7fedce-41ea-4ec5-93f1-9101a6f4c3e8", 00:18:43.208 "is_configured": true, 00:18:43.208 "data_offset": 2048, 00:18:43.208 "data_size": 63488 00:18:43.208 }, 00:18:43.208 { 00:18:43.208 "name": "BaseBdev4", 00:18:43.208 "uuid": "a6140044-3cca-463b-97fc-28540e0bf7d2", 00:18:43.208 "is_configured": true, 00:18:43.208 "data_offset": 2048, 00:18:43.208 "data_size": 63488 00:18:43.208 } 00:18:43.208 ] 00:18:43.208 }' 00:18:43.208 20:15:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:43.208 20:15:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:43.775 20:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:43.775 20:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:18:43.775 20:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:43.775 20:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:43.775 20:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:18:43.775 20:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:43.775 20:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:43.775 20:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:43.775 20:15:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.775 20:15:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:43.775 [2024-10-17 20:15:29.258152] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:43.775 20:15:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.775 20:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:43.775 "name": "Existed_Raid", 00:18:43.775 "aliases": [ 00:18:43.775 "785f69b9-efc2-477a-bf1f-cbcb9c50967e" 00:18:43.775 ], 00:18:43.775 "product_name": "Raid Volume", 00:18:43.775 "block_size": 512, 00:18:43.775 "num_blocks": 190464, 00:18:43.775 "uuid": "785f69b9-efc2-477a-bf1f-cbcb9c50967e", 00:18:43.775 "assigned_rate_limits": { 00:18:43.775 "rw_ios_per_sec": 0, 00:18:43.775 "rw_mbytes_per_sec": 0, 00:18:43.775 "r_mbytes_per_sec": 0, 00:18:43.775 "w_mbytes_per_sec": 0 00:18:43.775 }, 00:18:43.775 "claimed": false, 00:18:43.775 "zoned": false, 00:18:43.775 "supported_io_types": { 00:18:43.775 "read": true, 00:18:43.775 "write": true, 00:18:43.775 "unmap": false, 00:18:43.775 "flush": false, 
00:18:43.775 "reset": true, 00:18:43.775 "nvme_admin": false, 00:18:43.775 "nvme_io": false, 00:18:43.775 "nvme_io_md": false, 00:18:43.775 "write_zeroes": true, 00:18:43.775 "zcopy": false, 00:18:43.775 "get_zone_info": false, 00:18:43.775 "zone_management": false, 00:18:43.775 "zone_append": false, 00:18:43.775 "compare": false, 00:18:43.775 "compare_and_write": false, 00:18:43.775 "abort": false, 00:18:43.775 "seek_hole": false, 00:18:43.775 "seek_data": false, 00:18:43.775 "copy": false, 00:18:43.775 "nvme_iov_md": false 00:18:43.775 }, 00:18:43.775 "driver_specific": { 00:18:43.775 "raid": { 00:18:43.775 "uuid": "785f69b9-efc2-477a-bf1f-cbcb9c50967e", 00:18:43.775 "strip_size_kb": 64, 00:18:43.775 "state": "online", 00:18:43.775 "raid_level": "raid5f", 00:18:43.775 "superblock": true, 00:18:43.775 "num_base_bdevs": 4, 00:18:43.775 "num_base_bdevs_discovered": 4, 00:18:43.775 "num_base_bdevs_operational": 4, 00:18:43.775 "base_bdevs_list": [ 00:18:43.775 { 00:18:43.775 "name": "BaseBdev1", 00:18:43.775 "uuid": "80e73431-1627-470f-8dbf-53b5203a69fd", 00:18:43.775 "is_configured": true, 00:18:43.775 "data_offset": 2048, 00:18:43.775 "data_size": 63488 00:18:43.775 }, 00:18:43.775 { 00:18:43.775 "name": "BaseBdev2", 00:18:43.775 "uuid": "11478bcf-7fb1-4295-8a67-b0f25c440103", 00:18:43.775 "is_configured": true, 00:18:43.775 "data_offset": 2048, 00:18:43.775 "data_size": 63488 00:18:43.775 }, 00:18:43.775 { 00:18:43.775 "name": "BaseBdev3", 00:18:43.775 "uuid": "cc7fedce-41ea-4ec5-93f1-9101a6f4c3e8", 00:18:43.775 "is_configured": true, 00:18:43.775 "data_offset": 2048, 00:18:43.775 "data_size": 63488 00:18:43.775 }, 00:18:43.775 { 00:18:43.775 "name": "BaseBdev4", 00:18:43.775 "uuid": "a6140044-3cca-463b-97fc-28540e0bf7d2", 00:18:43.775 "is_configured": true, 00:18:43.775 "data_offset": 2048, 00:18:43.775 "data_size": 63488 00:18:43.775 } 00:18:43.775 ] 00:18:43.775 } 00:18:43.775 } 00:18:43.775 }' 00:18:43.775 20:15:29 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:43.775 20:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:43.775 BaseBdev2 00:18:43.775 BaseBdev3 00:18:43.775 BaseBdev4' 00:18:43.775 20:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:43.775 20:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:43.775 20:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:43.775 20:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:43.775 20:15:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.775 20:15:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:43.776 20:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:44.034 20:15:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.034 20:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:44.034 20:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:44.034 20:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:44.034 20:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:44.034 20:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:44.034 20:15:29 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.034 20:15:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:44.034 20:15:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.034 20:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:44.034 20:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:44.034 20:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:44.034 20:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:44.034 20:15:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.034 20:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:44.034 20:15:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:44.034 20:15:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.034 20:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:44.034 20:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:44.034 20:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:44.034 20:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:18:44.034 20:15:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.034 20:15:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:18:44.034 20:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:44.034 20:15:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.034 20:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:44.034 20:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:44.034 20:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:44.034 20:15:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.034 20:15:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:44.034 [2024-10-17 20:15:29.645971] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:44.293 20:15:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.293 20:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:44.293 20:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:18:44.293 20:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:44.293 20:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:18:44.293 20:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:44.293 20:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:18:44.293 20:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:44.293 20:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:18:44.293 20:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:44.293 20:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:44.293 20:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:44.293 20:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:44.293 20:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:44.293 20:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:44.293 20:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:44.293 20:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:44.293 20:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.293 20:15:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.293 20:15:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:44.293 20:15:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.293 20:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:44.293 "name": "Existed_Raid", 00:18:44.293 "uuid": "785f69b9-efc2-477a-bf1f-cbcb9c50967e", 00:18:44.293 "strip_size_kb": 64, 00:18:44.293 "state": "online", 00:18:44.293 "raid_level": "raid5f", 00:18:44.293 "superblock": true, 00:18:44.293 "num_base_bdevs": 4, 00:18:44.293 "num_base_bdevs_discovered": 3, 00:18:44.293 "num_base_bdevs_operational": 3, 00:18:44.293 "base_bdevs_list": [ 00:18:44.293 { 00:18:44.293 "name": null, 00:18:44.293 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:44.293 "is_configured": false, 00:18:44.293 "data_offset": 0, 00:18:44.293 "data_size": 63488 00:18:44.293 }, 00:18:44.293 { 00:18:44.293 "name": "BaseBdev2", 00:18:44.293 "uuid": "11478bcf-7fb1-4295-8a67-b0f25c440103", 00:18:44.293 "is_configured": true, 00:18:44.293 "data_offset": 2048, 00:18:44.293 "data_size": 63488 00:18:44.293 }, 00:18:44.293 { 00:18:44.293 "name": "BaseBdev3", 00:18:44.293 "uuid": "cc7fedce-41ea-4ec5-93f1-9101a6f4c3e8", 00:18:44.293 "is_configured": true, 00:18:44.293 "data_offset": 2048, 00:18:44.293 "data_size": 63488 00:18:44.293 }, 00:18:44.293 { 00:18:44.293 "name": "BaseBdev4", 00:18:44.293 "uuid": "a6140044-3cca-463b-97fc-28540e0bf7d2", 00:18:44.293 "is_configured": true, 00:18:44.293 "data_offset": 2048, 00:18:44.293 "data_size": 63488 00:18:44.293 } 00:18:44.293 ] 00:18:44.293 }' 00:18:44.293 20:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:44.293 20:15:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:44.859 20:15:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:44.859 20:15:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:44.859 20:15:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.859 20:15:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:44.859 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.859 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:44.859 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.859 20:15:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 
00:18:44.859 20:15:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:44.859 20:15:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:44.859 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.859 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:44.859 [2024-10-17 20:15:30.314964] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:44.859 [2024-10-17 20:15:30.315341] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:44.859 [2024-10-17 20:15:30.403003] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:44.859 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.859 20:15:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:44.859 20:15:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:44.859 20:15:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.859 20:15:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:44.859 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.859 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:44.859 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.859 20:15:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:44.859 20:15:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:44.859 
20:15:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:18:44.859 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.859 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:44.859 [2024-10-17 20:15:30.467119] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:45.117 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.117 20:15:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:45.117 20:15:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:45.117 20:15:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:45.117 20:15:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.117 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.117 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.117 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.117 20:15:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:45.117 20:15:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:45.117 20:15:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:18:45.117 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.117 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.117 [2024-10-17 20:15:30.610538] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:45.117 [2024-10-17 20:15:30.610595] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:45.117 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.117 20:15:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:45.117 20:15:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:45.117 20:15:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.117 20:15:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:45.117 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.117 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.117 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.117 20:15:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:45.117 20:15:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:45.117 20:15:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:18:45.117 20:15:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:18:45.117 20:15:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:45.117 20:15:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:45.117 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.117 20:15:30 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:18:45.376 BaseBdev2 00:18:45.376 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.376 20:15:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:18:45.376 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:18:45.376 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:45.376 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:45.376 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:45.376 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:45.376 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:45.376 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.376 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.376 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.376 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:45.376 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.376 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.376 [ 00:18:45.376 { 00:18:45.376 "name": "BaseBdev2", 00:18:45.376 "aliases": [ 00:18:45.376 "304d65cd-42d3-412a-b74c-ebbefb93da88" 00:18:45.376 ], 00:18:45.376 "product_name": "Malloc disk", 00:18:45.376 "block_size": 512, 00:18:45.376 "num_blocks": 65536, 00:18:45.376 "uuid": 
"304d65cd-42d3-412a-b74c-ebbefb93da88", 00:18:45.376 "assigned_rate_limits": { 00:18:45.376 "rw_ios_per_sec": 0, 00:18:45.376 "rw_mbytes_per_sec": 0, 00:18:45.376 "r_mbytes_per_sec": 0, 00:18:45.376 "w_mbytes_per_sec": 0 00:18:45.376 }, 00:18:45.376 "claimed": false, 00:18:45.376 "zoned": false, 00:18:45.376 "supported_io_types": { 00:18:45.376 "read": true, 00:18:45.376 "write": true, 00:18:45.376 "unmap": true, 00:18:45.376 "flush": true, 00:18:45.376 "reset": true, 00:18:45.376 "nvme_admin": false, 00:18:45.376 "nvme_io": false, 00:18:45.376 "nvme_io_md": false, 00:18:45.376 "write_zeroes": true, 00:18:45.376 "zcopy": true, 00:18:45.376 "get_zone_info": false, 00:18:45.376 "zone_management": false, 00:18:45.376 "zone_append": false, 00:18:45.376 "compare": false, 00:18:45.376 "compare_and_write": false, 00:18:45.376 "abort": true, 00:18:45.376 "seek_hole": false, 00:18:45.376 "seek_data": false, 00:18:45.376 "copy": true, 00:18:45.376 "nvme_iov_md": false 00:18:45.376 }, 00:18:45.376 "memory_domains": [ 00:18:45.376 { 00:18:45.376 "dma_device_id": "system", 00:18:45.376 "dma_device_type": 1 00:18:45.376 }, 00:18:45.376 { 00:18:45.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:45.376 "dma_device_type": 2 00:18:45.376 } 00:18:45.376 ], 00:18:45.376 "driver_specific": {} 00:18:45.376 } 00:18:45.376 ] 00:18:45.376 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.376 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:45.376 20:15:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:45.376 20:15:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:45.376 20:15:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:45.376 20:15:30 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.376 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.376 BaseBdev3 00:18:45.376 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.376 20:15:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:18:45.376 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:18:45.376 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:45.376 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:45.376 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:45.376 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:45.376 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:45.376 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.376 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.376 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.376 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:45.376 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.376 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.376 [ 00:18:45.376 { 00:18:45.376 "name": "BaseBdev3", 00:18:45.376 "aliases": [ 00:18:45.376 "9d77ecef-4eb2-4c82-8619-9d717adce668" 00:18:45.376 ], 00:18:45.376 
"product_name": "Malloc disk", 00:18:45.376 "block_size": 512, 00:18:45.376 "num_blocks": 65536, 00:18:45.376 "uuid": "9d77ecef-4eb2-4c82-8619-9d717adce668", 00:18:45.376 "assigned_rate_limits": { 00:18:45.376 "rw_ios_per_sec": 0, 00:18:45.376 "rw_mbytes_per_sec": 0, 00:18:45.376 "r_mbytes_per_sec": 0, 00:18:45.376 "w_mbytes_per_sec": 0 00:18:45.376 }, 00:18:45.376 "claimed": false, 00:18:45.376 "zoned": false, 00:18:45.376 "supported_io_types": { 00:18:45.376 "read": true, 00:18:45.376 "write": true, 00:18:45.376 "unmap": true, 00:18:45.376 "flush": true, 00:18:45.376 "reset": true, 00:18:45.376 "nvme_admin": false, 00:18:45.376 "nvme_io": false, 00:18:45.376 "nvme_io_md": false, 00:18:45.376 "write_zeroes": true, 00:18:45.376 "zcopy": true, 00:18:45.376 "get_zone_info": false, 00:18:45.376 "zone_management": false, 00:18:45.376 "zone_append": false, 00:18:45.376 "compare": false, 00:18:45.376 "compare_and_write": false, 00:18:45.376 "abort": true, 00:18:45.376 "seek_hole": false, 00:18:45.376 "seek_data": false, 00:18:45.376 "copy": true, 00:18:45.376 "nvme_iov_md": false 00:18:45.376 }, 00:18:45.376 "memory_domains": [ 00:18:45.376 { 00:18:45.376 "dma_device_id": "system", 00:18:45.376 "dma_device_type": 1 00:18:45.376 }, 00:18:45.376 { 00:18:45.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:45.376 "dma_device_type": 2 00:18:45.376 } 00:18:45.376 ], 00:18:45.376 "driver_specific": {} 00:18:45.376 } 00:18:45.376 ] 00:18:45.376 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.376 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:45.376 20:15:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:45.376 20:15:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:45.376 20:15:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev4 00:18:45.376 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.376 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.376 BaseBdev4 00:18:45.376 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.376 20:15:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:18:45.376 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:18:45.376 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:45.377 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:45.377 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:45.377 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:45.377 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:45.377 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.377 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.377 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.377 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:45.377 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.377 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.377 [ 00:18:45.377 { 00:18:45.377 "name": "BaseBdev4", 00:18:45.377 
"aliases": [ 00:18:45.377 "ade04367-5bef-4ac5-b071-11b9d5fa2e2f" 00:18:45.377 ], 00:18:45.377 "product_name": "Malloc disk", 00:18:45.377 "block_size": 512, 00:18:45.377 "num_blocks": 65536, 00:18:45.377 "uuid": "ade04367-5bef-4ac5-b071-11b9d5fa2e2f", 00:18:45.377 "assigned_rate_limits": { 00:18:45.377 "rw_ios_per_sec": 0, 00:18:45.377 "rw_mbytes_per_sec": 0, 00:18:45.377 "r_mbytes_per_sec": 0, 00:18:45.377 "w_mbytes_per_sec": 0 00:18:45.377 }, 00:18:45.377 "claimed": false, 00:18:45.377 "zoned": false, 00:18:45.377 "supported_io_types": { 00:18:45.377 "read": true, 00:18:45.377 "write": true, 00:18:45.377 "unmap": true, 00:18:45.377 "flush": true, 00:18:45.377 "reset": true, 00:18:45.377 "nvme_admin": false, 00:18:45.377 "nvme_io": false, 00:18:45.377 "nvme_io_md": false, 00:18:45.377 "write_zeroes": true, 00:18:45.377 "zcopy": true, 00:18:45.377 "get_zone_info": false, 00:18:45.377 "zone_management": false, 00:18:45.377 "zone_append": false, 00:18:45.377 "compare": false, 00:18:45.377 "compare_and_write": false, 00:18:45.377 "abort": true, 00:18:45.377 "seek_hole": false, 00:18:45.377 "seek_data": false, 00:18:45.377 "copy": true, 00:18:45.377 "nvme_iov_md": false 00:18:45.377 }, 00:18:45.377 "memory_domains": [ 00:18:45.377 { 00:18:45.377 "dma_device_id": "system", 00:18:45.377 "dma_device_type": 1 00:18:45.377 }, 00:18:45.377 { 00:18:45.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:45.377 "dma_device_type": 2 00:18:45.377 } 00:18:45.377 ], 00:18:45.377 "driver_specific": {} 00:18:45.377 } 00:18:45.377 ] 00:18:45.377 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.377 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:45.377 20:15:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:45.377 20:15:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:45.377 
20:15:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:45.377 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.377 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.377 [2024-10-17 20:15:30.985403] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:45.377 [2024-10-17 20:15:30.985638] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:45.377 [2024-10-17 20:15:30.985789] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:45.377 [2024-10-17 20:15:30.988686] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:45.377 [2024-10-17 20:15:30.988945] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:45.377 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.377 20:15:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:45.377 20:15:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:45.377 20:15:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:45.377 20:15:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:45.377 20:15:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:45.377 20:15:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:45.377 20:15:30 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:45.377 20:15:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:45.377 20:15:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:45.377 20:15:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:45.377 20:15:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.377 20:15:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:45.377 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.377 20:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.377 20:15:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.636 20:15:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:45.636 "name": "Existed_Raid", 00:18:45.636 "uuid": "40860b73-4734-48f6-8cdc-ff2eb469bc82", 00:18:45.636 "strip_size_kb": 64, 00:18:45.636 "state": "configuring", 00:18:45.636 "raid_level": "raid5f", 00:18:45.636 "superblock": true, 00:18:45.636 "num_base_bdevs": 4, 00:18:45.636 "num_base_bdevs_discovered": 3, 00:18:45.636 "num_base_bdevs_operational": 4, 00:18:45.636 "base_bdevs_list": [ 00:18:45.636 { 00:18:45.636 "name": "BaseBdev1", 00:18:45.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:45.636 "is_configured": false, 00:18:45.636 "data_offset": 0, 00:18:45.636 "data_size": 0 00:18:45.636 }, 00:18:45.636 { 00:18:45.636 "name": "BaseBdev2", 00:18:45.636 "uuid": "304d65cd-42d3-412a-b74c-ebbefb93da88", 00:18:45.636 "is_configured": true, 00:18:45.636 "data_offset": 2048, 00:18:45.636 "data_size": 63488 00:18:45.636 }, 00:18:45.636 { 00:18:45.636 "name": "BaseBdev3", 
00:18:45.636 "uuid": "9d77ecef-4eb2-4c82-8619-9d717adce668", 00:18:45.636 "is_configured": true, 00:18:45.636 "data_offset": 2048, 00:18:45.636 "data_size": 63488 00:18:45.636 }, 00:18:45.636 { 00:18:45.636 "name": "BaseBdev4", 00:18:45.636 "uuid": "ade04367-5bef-4ac5-b071-11b9d5fa2e2f", 00:18:45.636 "is_configured": true, 00:18:45.636 "data_offset": 2048, 00:18:45.636 "data_size": 63488 00:18:45.636 } 00:18:45.636 ] 00:18:45.636 }' 00:18:45.636 20:15:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:45.636 20:15:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.894 20:15:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:45.894 20:15:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.894 20:15:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.894 [2024-10-17 20:15:31.517641] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:45.894 20:15:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.894 20:15:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:45.894 20:15:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:45.894 20:15:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:45.894 20:15:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:45.894 20:15:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:45.894 20:15:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:45.894 
20:15:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:45.894 20:15:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:45.894 20:15:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:45.894 20:15:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:45.894 20:15:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.894 20:15:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:45.894 20:15:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.894 20:15:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.894 20:15:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.153 20:15:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:46.153 "name": "Existed_Raid", 00:18:46.153 "uuid": "40860b73-4734-48f6-8cdc-ff2eb469bc82", 00:18:46.153 "strip_size_kb": 64, 00:18:46.153 "state": "configuring", 00:18:46.153 "raid_level": "raid5f", 00:18:46.153 "superblock": true, 00:18:46.153 "num_base_bdevs": 4, 00:18:46.153 "num_base_bdevs_discovered": 2, 00:18:46.153 "num_base_bdevs_operational": 4, 00:18:46.153 "base_bdevs_list": [ 00:18:46.153 { 00:18:46.153 "name": "BaseBdev1", 00:18:46.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.153 "is_configured": false, 00:18:46.153 "data_offset": 0, 00:18:46.153 "data_size": 0 00:18:46.153 }, 00:18:46.153 { 00:18:46.153 "name": null, 00:18:46.153 "uuid": "304d65cd-42d3-412a-b74c-ebbefb93da88", 00:18:46.153 "is_configured": false, 00:18:46.153 "data_offset": 0, 00:18:46.153 "data_size": 63488 00:18:46.153 }, 00:18:46.153 { 
00:18:46.153 "name": "BaseBdev3", 00:18:46.153 "uuid": "9d77ecef-4eb2-4c82-8619-9d717adce668", 00:18:46.153 "is_configured": true, 00:18:46.153 "data_offset": 2048, 00:18:46.153 "data_size": 63488 00:18:46.153 }, 00:18:46.153 { 00:18:46.153 "name": "BaseBdev4", 00:18:46.153 "uuid": "ade04367-5bef-4ac5-b071-11b9d5fa2e2f", 00:18:46.153 "is_configured": true, 00:18:46.153 "data_offset": 2048, 00:18:46.153 "data_size": 63488 00:18:46.153 } 00:18:46.153 ] 00:18:46.153 }' 00:18:46.153 20:15:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:46.153 20:15:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.412 20:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:46.412 20:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.412 20:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.412 20:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.412 20:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.412 20:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:18:46.412 20:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:46.412 20:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.412 20:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.670 [2024-10-17 20:15:32.093272] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:46.670 BaseBdev1 00:18:46.670 20:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:18:46.670 20:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:18:46.670 20:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:18:46.670 20:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:46.670 20:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:46.670 20:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:46.670 20:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:46.670 20:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:46.670 20:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.670 20:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.670 20:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.670 20:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:46.670 20:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.670 20:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.670 [ 00:18:46.670 { 00:18:46.670 "name": "BaseBdev1", 00:18:46.670 "aliases": [ 00:18:46.670 "b645bcdb-0778-477c-bce3-f7e898b1531f" 00:18:46.670 ], 00:18:46.670 "product_name": "Malloc disk", 00:18:46.670 "block_size": 512, 00:18:46.670 "num_blocks": 65536, 00:18:46.670 "uuid": "b645bcdb-0778-477c-bce3-f7e898b1531f", 00:18:46.670 "assigned_rate_limits": { 00:18:46.670 "rw_ios_per_sec": 0, 00:18:46.670 "rw_mbytes_per_sec": 0, 00:18:46.670 
"r_mbytes_per_sec": 0, 00:18:46.670 "w_mbytes_per_sec": 0 00:18:46.670 }, 00:18:46.670 "claimed": true, 00:18:46.670 "claim_type": "exclusive_write", 00:18:46.670 "zoned": false, 00:18:46.670 "supported_io_types": { 00:18:46.670 "read": true, 00:18:46.670 "write": true, 00:18:46.670 "unmap": true, 00:18:46.670 "flush": true, 00:18:46.670 "reset": true, 00:18:46.670 "nvme_admin": false, 00:18:46.670 "nvme_io": false, 00:18:46.670 "nvme_io_md": false, 00:18:46.670 "write_zeroes": true, 00:18:46.670 "zcopy": true, 00:18:46.670 "get_zone_info": false, 00:18:46.670 "zone_management": false, 00:18:46.670 "zone_append": false, 00:18:46.670 "compare": false, 00:18:46.670 "compare_and_write": false, 00:18:46.670 "abort": true, 00:18:46.670 "seek_hole": false, 00:18:46.670 "seek_data": false, 00:18:46.670 "copy": true, 00:18:46.670 "nvme_iov_md": false 00:18:46.670 }, 00:18:46.670 "memory_domains": [ 00:18:46.670 { 00:18:46.670 "dma_device_id": "system", 00:18:46.670 "dma_device_type": 1 00:18:46.670 }, 00:18:46.670 { 00:18:46.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:46.670 "dma_device_type": 2 00:18:46.670 } 00:18:46.670 ], 00:18:46.670 "driver_specific": {} 00:18:46.670 } 00:18:46.670 ] 00:18:46.670 20:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.670 20:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:46.671 20:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:46.671 20:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:46.671 20:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:46.671 20:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:46.671 20:15:32 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:46.671 20:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:46.671 20:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:46.671 20:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:46.671 20:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:46.671 20:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:46.671 20:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.671 20:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:46.671 20:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.671 20:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.671 20:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.671 20:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:46.671 "name": "Existed_Raid", 00:18:46.671 "uuid": "40860b73-4734-48f6-8cdc-ff2eb469bc82", 00:18:46.671 "strip_size_kb": 64, 00:18:46.671 "state": "configuring", 00:18:46.671 "raid_level": "raid5f", 00:18:46.671 "superblock": true, 00:18:46.671 "num_base_bdevs": 4, 00:18:46.671 "num_base_bdevs_discovered": 3, 00:18:46.671 "num_base_bdevs_operational": 4, 00:18:46.671 "base_bdevs_list": [ 00:18:46.671 { 00:18:46.671 "name": "BaseBdev1", 00:18:46.671 "uuid": "b645bcdb-0778-477c-bce3-f7e898b1531f", 00:18:46.671 "is_configured": true, 00:18:46.671 "data_offset": 2048, 00:18:46.671 "data_size": 63488 00:18:46.671 
}, 00:18:46.671 { 00:18:46.671 "name": null, 00:18:46.671 "uuid": "304d65cd-42d3-412a-b74c-ebbefb93da88", 00:18:46.671 "is_configured": false, 00:18:46.671 "data_offset": 0, 00:18:46.671 "data_size": 63488 00:18:46.671 }, 00:18:46.671 { 00:18:46.671 "name": "BaseBdev3", 00:18:46.671 "uuid": "9d77ecef-4eb2-4c82-8619-9d717adce668", 00:18:46.671 "is_configured": true, 00:18:46.671 "data_offset": 2048, 00:18:46.671 "data_size": 63488 00:18:46.671 }, 00:18:46.671 { 00:18:46.671 "name": "BaseBdev4", 00:18:46.671 "uuid": "ade04367-5bef-4ac5-b071-11b9d5fa2e2f", 00:18:46.671 "is_configured": true, 00:18:46.671 "data_offset": 2048, 00:18:46.671 "data_size": 63488 00:18:46.671 } 00:18:46.671 ] 00:18:46.671 }' 00:18:46.671 20:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:46.671 20:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.248 20:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:47.248 20:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.248 20:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.248 20:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.248 20:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.248 20:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:18:47.248 20:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:18:47.248 20:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.248 20:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.248 
[2024-10-17 20:15:32.689584] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:47.248 20:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.248 20:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:47.248 20:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:47.248 20:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:47.248 20:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:47.248 20:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:47.248 20:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:47.248 20:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:47.248 20:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:47.248 20:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:47.248 20:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:47.248 20:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:47.248 20:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.248 20:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.248 20:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.248 20:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:18:47.248 20:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:47.248 "name": "Existed_Raid", 00:18:47.248 "uuid": "40860b73-4734-48f6-8cdc-ff2eb469bc82", 00:18:47.248 "strip_size_kb": 64, 00:18:47.248 "state": "configuring", 00:18:47.248 "raid_level": "raid5f", 00:18:47.248 "superblock": true, 00:18:47.248 "num_base_bdevs": 4, 00:18:47.248 "num_base_bdevs_discovered": 2, 00:18:47.248 "num_base_bdevs_operational": 4, 00:18:47.248 "base_bdevs_list": [ 00:18:47.248 { 00:18:47.248 "name": "BaseBdev1", 00:18:47.248 "uuid": "b645bcdb-0778-477c-bce3-f7e898b1531f", 00:18:47.248 "is_configured": true, 00:18:47.248 "data_offset": 2048, 00:18:47.248 "data_size": 63488 00:18:47.248 }, 00:18:47.248 { 00:18:47.248 "name": null, 00:18:47.248 "uuid": "304d65cd-42d3-412a-b74c-ebbefb93da88", 00:18:47.248 "is_configured": false, 00:18:47.248 "data_offset": 0, 00:18:47.248 "data_size": 63488 00:18:47.248 }, 00:18:47.248 { 00:18:47.248 "name": null, 00:18:47.248 "uuid": "9d77ecef-4eb2-4c82-8619-9d717adce668", 00:18:47.248 "is_configured": false, 00:18:47.248 "data_offset": 0, 00:18:47.248 "data_size": 63488 00:18:47.248 }, 00:18:47.248 { 00:18:47.248 "name": "BaseBdev4", 00:18:47.248 "uuid": "ade04367-5bef-4ac5-b071-11b9d5fa2e2f", 00:18:47.248 "is_configured": true, 00:18:47.248 "data_offset": 2048, 00:18:47.248 "data_size": 63488 00:18:47.248 } 00:18:47.248 ] 00:18:47.248 }' 00:18:47.248 20:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:47.248 20:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.815 20:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:47.815 20:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.815 20:15:33 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.815 20:15:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.815 20:15:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.815 20:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:18:47.815 20:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:47.815 20:15:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.815 20:15:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.815 [2024-10-17 20:15:33.269777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:47.815 20:15:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.815 20:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:47.815 20:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:47.815 20:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:47.815 20:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:47.815 20:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:47.815 20:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:47.815 20:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:47.815 20:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:47.815 20:15:33 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:47.815 20:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:47.815 20:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.815 20:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:47.815 20:15:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.815 20:15:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.815 20:15:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.815 20:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:47.815 "name": "Existed_Raid", 00:18:47.815 "uuid": "40860b73-4734-48f6-8cdc-ff2eb469bc82", 00:18:47.815 "strip_size_kb": 64, 00:18:47.815 "state": "configuring", 00:18:47.815 "raid_level": "raid5f", 00:18:47.815 "superblock": true, 00:18:47.815 "num_base_bdevs": 4, 00:18:47.815 "num_base_bdevs_discovered": 3, 00:18:47.815 "num_base_bdevs_operational": 4, 00:18:47.815 "base_bdevs_list": [ 00:18:47.815 { 00:18:47.815 "name": "BaseBdev1", 00:18:47.815 "uuid": "b645bcdb-0778-477c-bce3-f7e898b1531f", 00:18:47.815 "is_configured": true, 00:18:47.815 "data_offset": 2048, 00:18:47.815 "data_size": 63488 00:18:47.815 }, 00:18:47.815 { 00:18:47.815 "name": null, 00:18:47.815 "uuid": "304d65cd-42d3-412a-b74c-ebbefb93da88", 00:18:47.815 "is_configured": false, 00:18:47.815 "data_offset": 0, 00:18:47.815 "data_size": 63488 00:18:47.815 }, 00:18:47.815 { 00:18:47.815 "name": "BaseBdev3", 00:18:47.815 "uuid": "9d77ecef-4eb2-4c82-8619-9d717adce668", 00:18:47.815 "is_configured": true, 00:18:47.815 "data_offset": 2048, 00:18:47.815 "data_size": 63488 00:18:47.815 }, 00:18:47.815 { 
00:18:47.815 "name": "BaseBdev4", 00:18:47.815 "uuid": "ade04367-5bef-4ac5-b071-11b9d5fa2e2f", 00:18:47.815 "is_configured": true, 00:18:47.815 "data_offset": 2048, 00:18:47.815 "data_size": 63488 00:18:47.815 } 00:18:47.815 ] 00:18:47.815 }' 00:18:47.815 20:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:47.815 20:15:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:48.382 20:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.382 20:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:48.382 20:15:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.382 20:15:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:48.382 20:15:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.382 20:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:18:48.382 20:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:48.382 20:15:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.382 20:15:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:48.382 [2024-10-17 20:15:33.846040] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:48.382 20:15:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.382 20:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:48.382 20:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:18:48.382 20:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:48.382 20:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:48.382 20:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:48.382 20:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:48.382 20:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:48.382 20:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:48.382 20:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:48.382 20:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:48.382 20:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.382 20:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:48.382 20:15:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.382 20:15:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:48.382 20:15:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.382 20:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:48.382 "name": "Existed_Raid", 00:18:48.382 "uuid": "40860b73-4734-48f6-8cdc-ff2eb469bc82", 00:18:48.382 "strip_size_kb": 64, 00:18:48.382 "state": "configuring", 00:18:48.382 "raid_level": "raid5f", 00:18:48.382 "superblock": true, 00:18:48.382 "num_base_bdevs": 4, 00:18:48.382 "num_base_bdevs_discovered": 2, 00:18:48.382 
"num_base_bdevs_operational": 4, 00:18:48.382 "base_bdevs_list": [ 00:18:48.382 { 00:18:48.382 "name": null, 00:18:48.382 "uuid": "b645bcdb-0778-477c-bce3-f7e898b1531f", 00:18:48.382 "is_configured": false, 00:18:48.382 "data_offset": 0, 00:18:48.382 "data_size": 63488 00:18:48.382 }, 00:18:48.382 { 00:18:48.382 "name": null, 00:18:48.382 "uuid": "304d65cd-42d3-412a-b74c-ebbefb93da88", 00:18:48.382 "is_configured": false, 00:18:48.382 "data_offset": 0, 00:18:48.382 "data_size": 63488 00:18:48.382 }, 00:18:48.382 { 00:18:48.382 "name": "BaseBdev3", 00:18:48.382 "uuid": "9d77ecef-4eb2-4c82-8619-9d717adce668", 00:18:48.382 "is_configured": true, 00:18:48.382 "data_offset": 2048, 00:18:48.382 "data_size": 63488 00:18:48.382 }, 00:18:48.382 { 00:18:48.382 "name": "BaseBdev4", 00:18:48.382 "uuid": "ade04367-5bef-4ac5-b071-11b9d5fa2e2f", 00:18:48.382 "is_configured": true, 00:18:48.382 "data_offset": 2048, 00:18:48.382 "data_size": 63488 00:18:48.382 } 00:18:48.382 ] 00:18:48.382 }' 00:18:48.382 20:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:48.382 20:15:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:48.948 20:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:48.948 20:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.948 20:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.948 20:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:48.948 20:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.948 20:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:18:48.948 20:15:34 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:48.948 20:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.948 20:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:48.948 [2024-10-17 20:15:34.510938] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:48.948 20:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.948 20:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:48.948 20:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:48.948 20:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:48.948 20:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:48.948 20:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:48.948 20:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:48.948 20:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:48.948 20:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:48.948 20:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:48.948 20:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:48.948 20:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.948 20:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:18:48.948 20:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.948 20:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:48.948 20:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.948 20:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:48.948 "name": "Existed_Raid", 00:18:48.948 "uuid": "40860b73-4734-48f6-8cdc-ff2eb469bc82", 00:18:48.948 "strip_size_kb": 64, 00:18:48.948 "state": "configuring", 00:18:48.948 "raid_level": "raid5f", 00:18:48.948 "superblock": true, 00:18:48.948 "num_base_bdevs": 4, 00:18:48.948 "num_base_bdevs_discovered": 3, 00:18:48.948 "num_base_bdevs_operational": 4, 00:18:48.948 "base_bdevs_list": [ 00:18:48.948 { 00:18:48.948 "name": null, 00:18:48.948 "uuid": "b645bcdb-0778-477c-bce3-f7e898b1531f", 00:18:48.948 "is_configured": false, 00:18:48.948 "data_offset": 0, 00:18:48.948 "data_size": 63488 00:18:48.948 }, 00:18:48.948 { 00:18:48.948 "name": "BaseBdev2", 00:18:48.948 "uuid": "304d65cd-42d3-412a-b74c-ebbefb93da88", 00:18:48.948 "is_configured": true, 00:18:48.948 "data_offset": 2048, 00:18:48.948 "data_size": 63488 00:18:48.948 }, 00:18:48.948 { 00:18:48.948 "name": "BaseBdev3", 00:18:48.948 "uuid": "9d77ecef-4eb2-4c82-8619-9d717adce668", 00:18:48.948 "is_configured": true, 00:18:48.948 "data_offset": 2048, 00:18:48.948 "data_size": 63488 00:18:48.948 }, 00:18:48.948 { 00:18:48.948 "name": "BaseBdev4", 00:18:48.948 "uuid": "ade04367-5bef-4ac5-b071-11b9d5fa2e2f", 00:18:48.948 "is_configured": true, 00:18:48.948 "data_offset": 2048, 00:18:48.948 "data_size": 63488 00:18:48.948 } 00:18:48.948 ] 00:18:48.948 }' 00:18:48.948 20:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:48.948 20:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:18:49.515 20:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.515 20:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:49.515 20:15:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.515 20:15:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:49.515 20:15:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.515 20:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:18:49.515 20:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.515 20:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:49.515 20:15:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.515 20:15:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:49.515 20:15:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.515 20:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b645bcdb-0778-477c-bce3-f7e898b1531f 00:18:49.515 20:15:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.515 20:15:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:49.515 [2024-10-17 20:15:35.166150] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:49.515 [2024-10-17 20:15:35.166475] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:49.515 [2024-10-17 
20:15:35.166492] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:49.515 [2024-10-17 20:15:35.166787] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:18:49.515 NewBaseBdev 00:18:49.774 20:15:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.774 20:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:18:49.774 20:15:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:18:49.774 20:15:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:49.774 20:15:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:49.774 20:15:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:49.774 20:15:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:49.774 20:15:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:49.774 20:15:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.774 20:15:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:49.774 [2024-10-17 20:15:35.173455] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:49.774 [2024-10-17 20:15:35.173502] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:18:49.774 [2024-10-17 20:15:35.173789] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:49.774 20:15:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.774 20:15:35 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:49.774 20:15:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.774 20:15:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:49.774 [ 00:18:49.774 { 00:18:49.774 "name": "NewBaseBdev", 00:18:49.774 "aliases": [ 00:18:49.774 "b645bcdb-0778-477c-bce3-f7e898b1531f" 00:18:49.774 ], 00:18:49.774 "product_name": "Malloc disk", 00:18:49.774 "block_size": 512, 00:18:49.774 "num_blocks": 65536, 00:18:49.774 "uuid": "b645bcdb-0778-477c-bce3-f7e898b1531f", 00:18:49.774 "assigned_rate_limits": { 00:18:49.774 "rw_ios_per_sec": 0, 00:18:49.774 "rw_mbytes_per_sec": 0, 00:18:49.774 "r_mbytes_per_sec": 0, 00:18:49.774 "w_mbytes_per_sec": 0 00:18:49.774 }, 00:18:49.774 "claimed": true, 00:18:49.774 "claim_type": "exclusive_write", 00:18:49.774 "zoned": false, 00:18:49.774 "supported_io_types": { 00:18:49.774 "read": true, 00:18:49.774 "write": true, 00:18:49.774 "unmap": true, 00:18:49.774 "flush": true, 00:18:49.774 "reset": true, 00:18:49.774 "nvme_admin": false, 00:18:49.774 "nvme_io": false, 00:18:49.774 "nvme_io_md": false, 00:18:49.774 "write_zeroes": true, 00:18:49.774 "zcopy": true, 00:18:49.774 "get_zone_info": false, 00:18:49.774 "zone_management": false, 00:18:49.774 "zone_append": false, 00:18:49.774 "compare": false, 00:18:49.774 "compare_and_write": false, 00:18:49.774 "abort": true, 00:18:49.774 "seek_hole": false, 00:18:49.774 "seek_data": false, 00:18:49.774 "copy": true, 00:18:49.774 "nvme_iov_md": false 00:18:49.774 }, 00:18:49.774 "memory_domains": [ 00:18:49.774 { 00:18:49.774 "dma_device_id": "system", 00:18:49.774 "dma_device_type": 1 00:18:49.774 }, 00:18:49.774 { 00:18:49.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:49.774 "dma_device_type": 2 00:18:49.774 } 00:18:49.774 ], 00:18:49.774 "driver_specific": {} 00:18:49.774 } 00:18:49.774 ] 00:18:49.774 20:15:35 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.774 20:15:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:49.775 20:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:18:49.775 20:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:49.775 20:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:49.775 20:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:49.775 20:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:49.775 20:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:49.775 20:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:49.775 20:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:49.775 20:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:49.775 20:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:49.775 20:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:49.775 20:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.775 20:15:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.775 20:15:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:49.775 20:15:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:18:49.775 20:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:49.775 "name": "Existed_Raid", 00:18:49.775 "uuid": "40860b73-4734-48f6-8cdc-ff2eb469bc82", 00:18:49.775 "strip_size_kb": 64, 00:18:49.775 "state": "online", 00:18:49.775 "raid_level": "raid5f", 00:18:49.775 "superblock": true, 00:18:49.775 "num_base_bdevs": 4, 00:18:49.775 "num_base_bdevs_discovered": 4, 00:18:49.775 "num_base_bdevs_operational": 4, 00:18:49.775 "base_bdevs_list": [ 00:18:49.775 { 00:18:49.775 "name": "NewBaseBdev", 00:18:49.775 "uuid": "b645bcdb-0778-477c-bce3-f7e898b1531f", 00:18:49.775 "is_configured": true, 00:18:49.775 "data_offset": 2048, 00:18:49.775 "data_size": 63488 00:18:49.775 }, 00:18:49.775 { 00:18:49.775 "name": "BaseBdev2", 00:18:49.775 "uuid": "304d65cd-42d3-412a-b74c-ebbefb93da88", 00:18:49.775 "is_configured": true, 00:18:49.775 "data_offset": 2048, 00:18:49.775 "data_size": 63488 00:18:49.775 }, 00:18:49.775 { 00:18:49.775 "name": "BaseBdev3", 00:18:49.775 "uuid": "9d77ecef-4eb2-4c82-8619-9d717adce668", 00:18:49.775 "is_configured": true, 00:18:49.775 "data_offset": 2048, 00:18:49.775 "data_size": 63488 00:18:49.775 }, 00:18:49.775 { 00:18:49.775 "name": "BaseBdev4", 00:18:49.775 "uuid": "ade04367-5bef-4ac5-b071-11b9d5fa2e2f", 00:18:49.775 "is_configured": true, 00:18:49.775 "data_offset": 2048, 00:18:49.775 "data_size": 63488 00:18:49.775 } 00:18:49.775 ] 00:18:49.775 }' 00:18:49.775 20:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:49.775 20:15:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.342 20:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:18:50.342 20:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:50.342 20:15:35 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:50.342 20:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:50.342 20:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:18:50.342 20:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:50.342 20:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:50.342 20:15:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.342 20:15:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.342 20:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:50.342 [2024-10-17 20:15:35.725554] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:50.342 20:15:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.342 20:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:50.342 "name": "Existed_Raid", 00:18:50.342 "aliases": [ 00:18:50.342 "40860b73-4734-48f6-8cdc-ff2eb469bc82" 00:18:50.342 ], 00:18:50.342 "product_name": "Raid Volume", 00:18:50.342 "block_size": 512, 00:18:50.342 "num_blocks": 190464, 00:18:50.342 "uuid": "40860b73-4734-48f6-8cdc-ff2eb469bc82", 00:18:50.342 "assigned_rate_limits": { 00:18:50.342 "rw_ios_per_sec": 0, 00:18:50.342 "rw_mbytes_per_sec": 0, 00:18:50.342 "r_mbytes_per_sec": 0, 00:18:50.342 "w_mbytes_per_sec": 0 00:18:50.342 }, 00:18:50.342 "claimed": false, 00:18:50.342 "zoned": false, 00:18:50.342 "supported_io_types": { 00:18:50.342 "read": true, 00:18:50.342 "write": true, 00:18:50.342 "unmap": false, 00:18:50.342 "flush": false, 00:18:50.342 "reset": true, 00:18:50.342 "nvme_admin": false, 00:18:50.342 "nvme_io": false, 
00:18:50.342 "nvme_io_md": false, 00:18:50.342 "write_zeroes": true, 00:18:50.342 "zcopy": false, 00:18:50.342 "get_zone_info": false, 00:18:50.342 "zone_management": false, 00:18:50.342 "zone_append": false, 00:18:50.342 "compare": false, 00:18:50.342 "compare_and_write": false, 00:18:50.342 "abort": false, 00:18:50.342 "seek_hole": false, 00:18:50.342 "seek_data": false, 00:18:50.342 "copy": false, 00:18:50.342 "nvme_iov_md": false 00:18:50.342 }, 00:18:50.342 "driver_specific": { 00:18:50.342 "raid": { 00:18:50.342 "uuid": "40860b73-4734-48f6-8cdc-ff2eb469bc82", 00:18:50.342 "strip_size_kb": 64, 00:18:50.342 "state": "online", 00:18:50.342 "raid_level": "raid5f", 00:18:50.342 "superblock": true, 00:18:50.342 "num_base_bdevs": 4, 00:18:50.342 "num_base_bdevs_discovered": 4, 00:18:50.342 "num_base_bdevs_operational": 4, 00:18:50.342 "base_bdevs_list": [ 00:18:50.342 { 00:18:50.342 "name": "NewBaseBdev", 00:18:50.342 "uuid": "b645bcdb-0778-477c-bce3-f7e898b1531f", 00:18:50.342 "is_configured": true, 00:18:50.342 "data_offset": 2048, 00:18:50.342 "data_size": 63488 00:18:50.342 }, 00:18:50.342 { 00:18:50.342 "name": "BaseBdev2", 00:18:50.342 "uuid": "304d65cd-42d3-412a-b74c-ebbefb93da88", 00:18:50.342 "is_configured": true, 00:18:50.342 "data_offset": 2048, 00:18:50.342 "data_size": 63488 00:18:50.342 }, 00:18:50.342 { 00:18:50.342 "name": "BaseBdev3", 00:18:50.342 "uuid": "9d77ecef-4eb2-4c82-8619-9d717adce668", 00:18:50.342 "is_configured": true, 00:18:50.342 "data_offset": 2048, 00:18:50.342 "data_size": 63488 00:18:50.342 }, 00:18:50.342 { 00:18:50.342 "name": "BaseBdev4", 00:18:50.342 "uuid": "ade04367-5bef-4ac5-b071-11b9d5fa2e2f", 00:18:50.342 "is_configured": true, 00:18:50.342 "data_offset": 2048, 00:18:50.342 "data_size": 63488 00:18:50.342 } 00:18:50.342 ] 00:18:50.342 } 00:18:50.342 } 00:18:50.342 }' 00:18:50.342 20:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:18:50.342 20:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:18:50.342 BaseBdev2 00:18:50.342 BaseBdev3 00:18:50.342 BaseBdev4' 00:18:50.342 20:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:50.342 20:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:50.342 20:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:50.342 20:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:50.342 20:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:18:50.342 20:15:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.342 20:15:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.342 20:15:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.342 20:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:50.342 20:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:50.342 20:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:50.342 20:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:50.342 20:15:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.342 20:15:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.342 20:15:35 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:50.342 20:15:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.342 20:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:50.342 20:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:50.342 20:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:50.342 20:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:50.342 20:15:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.342 20:15:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.342 20:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:50.602 20:15:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.602 20:15:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:50.602 20:15:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:50.602 20:15:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:50.602 20:15:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:18:50.602 20:15:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:50.602 20:15:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:50.602 20:15:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.602 20:15:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.602 20:15:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:50.602 20:15:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:50.602 20:15:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:50.602 20:15:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.602 20:15:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.602 [2024-10-17 20:15:36.089368] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:50.602 [2024-10-17 20:15:36.089449] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:50.602 [2024-10-17 20:15:36.089540] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:50.602 [2024-10-17 20:15:36.089871] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:50.602 [2024-10-17 20:15:36.089888] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:18:50.602 20:15:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.602 20:15:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83737 00:18:50.602 20:15:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 83737 ']' 00:18:50.602 20:15:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 83737 00:18:50.602 20:15:36 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:18:50.602 20:15:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:50.602 20:15:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83737 00:18:50.602 20:15:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:50.602 killing process with pid 83737 00:18:50.602 20:15:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:50.602 20:15:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83737' 00:18:50.602 20:15:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 83737 00:18:50.602 [2024-10-17 20:15:36.125430] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:50.602 20:15:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 83737 00:18:50.861 [2024-10-17 20:15:36.465657] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:51.796 ************************************ 00:18:51.796 END TEST raid5f_state_function_test_sb 00:18:51.796 ************************************ 00:18:51.796 20:15:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:18:51.796 00:18:51.796 real 0m12.754s 00:18:51.796 user 0m21.221s 00:18:51.796 sys 0m1.832s 00:18:51.796 20:15:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:51.796 20:15:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.054 20:15:37 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:18:52.054 20:15:37 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:18:52.054 
20:15:37 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:52.054 20:15:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:52.054 ************************************ 00:18:52.054 START TEST raid5f_superblock_test 00:18:52.054 ************************************ 00:18:52.054 20:15:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 4 00:18:52.055 20:15:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:18:52.055 20:15:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:18:52.055 20:15:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:52.055 20:15:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:52.055 20:15:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:52.055 20:15:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:18:52.055 20:15:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:52.055 20:15:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:52.055 20:15:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:52.055 20:15:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:52.055 20:15:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:52.055 20:15:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:52.055 20:15:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:52.055 20:15:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:18:52.055 20:15:37 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:18:52.055 20:15:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:18:52.055 20:15:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84418 00:18:52.055 20:15:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:52.055 20:15:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84418 00:18:52.055 20:15:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 84418 ']' 00:18:52.055 20:15:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:52.055 20:15:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:52.055 20:15:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:52.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:52.055 20:15:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:52.055 20:15:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:52.055 [2024-10-17 20:15:37.577176] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 
00:18:52.055 [2024-10-17 20:15:37.577631] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84418 ] 00:18:52.313 [2024-10-17 20:15:37.750426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.313 [2024-10-17 20:15:37.865818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:52.571 [2024-10-17 20:15:38.055917] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:52.571 [2024-10-17 20:15:38.056257] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:52.829 20:15:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:52.829 20:15:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:18:52.829 20:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:52.829 20:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:52.829 20:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:52.829 20:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:52.829 20:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:52.829 20:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:52.829 20:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:52.829 20:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:52.829 20:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:18:52.829 20:15:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.829 20:15:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.089 malloc1 00:18:53.089 20:15:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.089 20:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:53.089 20:15:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.089 20:15:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.089 [2024-10-17 20:15:38.523059] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:53.089 [2024-10-17 20:15:38.523309] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:53.089 [2024-10-17 20:15:38.523376] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:53.089 [2024-10-17 20:15:38.523399] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:53.089 [2024-10-17 20:15:38.526390] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:53.089 [2024-10-17 20:15:38.526603] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:53.089 pt1 00:18:53.089 20:15:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.089 20:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:53.089 20:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:53.089 20:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:53.089 20:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:18:53.089 20:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:53.089 20:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:53.089 20:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:53.089 20:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:53.089 20:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:18:53.089 20:15:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.089 20:15:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.089 malloc2 00:18:53.089 20:15:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.089 20:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:53.089 20:15:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.089 20:15:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.089 [2024-10-17 20:15:38.581130] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:53.089 [2024-10-17 20:15:38.581362] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:53.089 [2024-10-17 20:15:38.581404] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:53.089 [2024-10-17 20:15:38.581433] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:53.089 [2024-10-17 20:15:38.584356] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:53.089 [2024-10-17 20:15:38.584511] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:53.089 pt2 00:18:53.089 20:15:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.089 20:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:53.090 20:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:53.090 20:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:18:53.090 20:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:18:53.090 20:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:53.090 20:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:53.090 20:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:53.090 20:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:53.090 20:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:18:53.090 20:15:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.090 20:15:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.090 malloc3 00:18:53.090 20:15:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.090 20:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:53.090 20:15:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.090 20:15:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.090 [2024-10-17 20:15:38.650941] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:53.090 [2024-10-17 20:15:38.651061] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:53.090 [2024-10-17 20:15:38.651094] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:53.090 [2024-10-17 20:15:38.651109] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:53.090 [2024-10-17 20:15:38.654025] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:53.090 [2024-10-17 20:15:38.654094] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:53.090 pt3 00:18:53.090 20:15:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.090 20:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:53.090 20:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:53.090 20:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:18:53.090 20:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:18:53.090 20:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:18:53.090 20:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:53.090 20:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:53.090 20:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:53.090 20:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:18:53.090 20:15:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.090 20:15:38 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.090 malloc4 00:18:53.090 20:15:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.090 20:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:53.090 20:15:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.090 20:15:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.090 [2024-10-17 20:15:38.708718] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:53.090 [2024-10-17 20:15:38.708776] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:53.090 [2024-10-17 20:15:38.708802] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:53.090 [2024-10-17 20:15:38.708816] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:53.090 [2024-10-17 20:15:38.711704] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:53.090 [2024-10-17 20:15:38.711764] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:53.090 pt4 00:18:53.090 20:15:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.090 20:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:53.090 20:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:53.090 20:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:18:53.090 20:15:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.090 20:15:38 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:53.090 [2024-10-17 20:15:38.720824] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:53.090 [2024-10-17 20:15:38.723333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:53.090 [2024-10-17 20:15:38.723430] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:53.090 [2024-10-17 20:15:38.723525] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:53.090 [2024-10-17 20:15:38.723767] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:53.090 [2024-10-17 20:15:38.723787] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:53.090 [2024-10-17 20:15:38.724169] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:53.090 [2024-10-17 20:15:38.731380] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:53.090 [2024-10-17 20:15:38.731415] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:53.090 [2024-10-17 20:15:38.731647] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:53.090 20:15:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.090 20:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:53.090 20:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:53.090 20:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:53.090 20:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:53.090 20:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:53.090 
20:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:53.090 20:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:53.090 20:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:53.090 20:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:53.090 20:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:53.090 20:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.090 20:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.090 20:15:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.090 20:15:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.348 20:15:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.349 20:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:53.349 "name": "raid_bdev1", 00:18:53.349 "uuid": "c368ccc0-9d42-4959-a964-f785a4809ae8", 00:18:53.349 "strip_size_kb": 64, 00:18:53.349 "state": "online", 00:18:53.349 "raid_level": "raid5f", 00:18:53.349 "superblock": true, 00:18:53.349 "num_base_bdevs": 4, 00:18:53.349 "num_base_bdevs_discovered": 4, 00:18:53.349 "num_base_bdevs_operational": 4, 00:18:53.349 "base_bdevs_list": [ 00:18:53.349 { 00:18:53.349 "name": "pt1", 00:18:53.349 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:53.349 "is_configured": true, 00:18:53.349 "data_offset": 2048, 00:18:53.349 "data_size": 63488 00:18:53.349 }, 00:18:53.349 { 00:18:53.349 "name": "pt2", 00:18:53.349 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:53.349 "is_configured": true, 00:18:53.349 "data_offset": 2048, 00:18:53.349 
"data_size": 63488 00:18:53.349 }, 00:18:53.349 { 00:18:53.349 "name": "pt3", 00:18:53.349 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:53.349 "is_configured": true, 00:18:53.349 "data_offset": 2048, 00:18:53.349 "data_size": 63488 00:18:53.349 }, 00:18:53.349 { 00:18:53.349 "name": "pt4", 00:18:53.349 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:53.349 "is_configured": true, 00:18:53.349 "data_offset": 2048, 00:18:53.349 "data_size": 63488 00:18:53.349 } 00:18:53.349 ] 00:18:53.349 }' 00:18:53.349 20:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:53.349 20:15:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.915 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:53.915 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:53.915 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:53.915 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:53.915 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:53.915 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:53.915 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:53.915 20:15:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.915 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:53.915 20:15:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.915 [2024-10-17 20:15:39.283776] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:53.915 20:15:39 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.915 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:53.915 "name": "raid_bdev1", 00:18:53.915 "aliases": [ 00:18:53.915 "c368ccc0-9d42-4959-a964-f785a4809ae8" 00:18:53.915 ], 00:18:53.915 "product_name": "Raid Volume", 00:18:53.915 "block_size": 512, 00:18:53.915 "num_blocks": 190464, 00:18:53.915 "uuid": "c368ccc0-9d42-4959-a964-f785a4809ae8", 00:18:53.915 "assigned_rate_limits": { 00:18:53.915 "rw_ios_per_sec": 0, 00:18:53.915 "rw_mbytes_per_sec": 0, 00:18:53.915 "r_mbytes_per_sec": 0, 00:18:53.915 "w_mbytes_per_sec": 0 00:18:53.915 }, 00:18:53.915 "claimed": false, 00:18:53.915 "zoned": false, 00:18:53.915 "supported_io_types": { 00:18:53.915 "read": true, 00:18:53.915 "write": true, 00:18:53.915 "unmap": false, 00:18:53.915 "flush": false, 00:18:53.915 "reset": true, 00:18:53.915 "nvme_admin": false, 00:18:53.915 "nvme_io": false, 00:18:53.915 "nvme_io_md": false, 00:18:53.915 "write_zeroes": true, 00:18:53.915 "zcopy": false, 00:18:53.915 "get_zone_info": false, 00:18:53.915 "zone_management": false, 00:18:53.915 "zone_append": false, 00:18:53.915 "compare": false, 00:18:53.915 "compare_and_write": false, 00:18:53.915 "abort": false, 00:18:53.915 "seek_hole": false, 00:18:53.915 "seek_data": false, 00:18:53.915 "copy": false, 00:18:53.915 "nvme_iov_md": false 00:18:53.915 }, 00:18:53.915 "driver_specific": { 00:18:53.915 "raid": { 00:18:53.915 "uuid": "c368ccc0-9d42-4959-a964-f785a4809ae8", 00:18:53.915 "strip_size_kb": 64, 00:18:53.915 "state": "online", 00:18:53.915 "raid_level": "raid5f", 00:18:53.915 "superblock": true, 00:18:53.915 "num_base_bdevs": 4, 00:18:53.915 "num_base_bdevs_discovered": 4, 00:18:53.915 "num_base_bdevs_operational": 4, 00:18:53.915 "base_bdevs_list": [ 00:18:53.915 { 00:18:53.915 "name": "pt1", 00:18:53.915 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:53.915 "is_configured": true, 00:18:53.915 "data_offset": 2048, 
00:18:53.915 "data_size": 63488 00:18:53.915 }, 00:18:53.915 { 00:18:53.915 "name": "pt2", 00:18:53.915 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:53.915 "is_configured": true, 00:18:53.915 "data_offset": 2048, 00:18:53.915 "data_size": 63488 00:18:53.915 }, 00:18:53.915 { 00:18:53.915 "name": "pt3", 00:18:53.915 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:53.915 "is_configured": true, 00:18:53.915 "data_offset": 2048, 00:18:53.915 "data_size": 63488 00:18:53.915 }, 00:18:53.915 { 00:18:53.915 "name": "pt4", 00:18:53.915 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:53.915 "is_configured": true, 00:18:53.915 "data_offset": 2048, 00:18:53.915 "data_size": 63488 00:18:53.915 } 00:18:53.915 ] 00:18:53.915 } 00:18:53.915 } 00:18:53.915 }' 00:18:53.915 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:53.915 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:53.915 pt2 00:18:53.915 pt3 00:18:53.915 pt4' 00:18:53.915 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:53.915 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:53.915 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:53.915 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:53.915 20:15:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.915 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:53.915 20:15:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.915 20:15:39 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.915 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:53.915 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:53.915 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:53.915 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:53.915 20:15:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.915 20:15:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.915 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:53.915 20:15:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.915 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:53.915 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:53.915 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:53.915 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:53.915 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:18:53.915 20:15:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.915 20:15:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.915 20:15:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.174 20:15:39 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:54.174 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:54.174 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:54.174 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:18:54.174 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:54.174 20:15:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.174 20:15:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.174 20:15:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.174 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:54.174 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:54.174 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:54.174 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:54.174 20:15:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.174 20:15:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.174 [2024-10-17 20:15:39.655785] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:54.174 20:15:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.174 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c368ccc0-9d42-4959-a964-f785a4809ae8 00:18:54.174 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
c368ccc0-9d42-4959-a964-f785a4809ae8 ']' 00:18:54.174 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:54.174 20:15:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.174 20:15:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.174 [2024-10-17 20:15:39.707616] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:54.174 [2024-10-17 20:15:39.707793] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:54.174 [2024-10-17 20:15:39.708018] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:54.174 [2024-10-17 20:15:39.708270] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:54.174 [2024-10-17 20:15:39.708428] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:54.174 20:15:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.174 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:54.174 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.174 20:15:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.174 20:15:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.174 20:15:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.174 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:54.174 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:54.174 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:54.174 
20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:54.174 20:15:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.174 20:15:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.174 20:15:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.174 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:54.174 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:54.174 20:15:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.174 20:15:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.174 20:15:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.174 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:54.174 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:18:54.174 20:15:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.174 20:15:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.174 20:15:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.174 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:54.174 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:18:54.174 20:15:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.175 20:15:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.175 20:15:39 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.175 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:54.175 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:54.175 20:15:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.175 20:15:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.433 20:15:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.433 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:54.433 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:18:54.433 20:15:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:18:54.433 20:15:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:18:54.433 20:15:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:54.433 20:15:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:54.433 20:15:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:54.433 20:15:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:54.433 20:15:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:18:54.433 20:15:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:18:54.433 20:15:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.433 [2024-10-17 20:15:39.859684] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:54.433 [2024-10-17 20:15:39.862282] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:54.433 [2024-10-17 20:15:39.862402] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:54.433 [2024-10-17 20:15:39.862460] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:18:54.433 [2024-10-17 20:15:39.862531] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:54.433 [2024-10-17 20:15:39.862627] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:54.433 [2024-10-17 20:15:39.862660] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:18:54.433 [2024-10-17 20:15:39.862691] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:18:54.433 [2024-10-17 20:15:39.862713] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:54.433 [2024-10-17 20:15:39.862728] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:54.433 request: 00:18:54.433 { 00:18:54.433 "name": "raid_bdev1", 00:18:54.433 "raid_level": "raid5f", 00:18:54.433 "base_bdevs": [ 00:18:54.433 "malloc1", 00:18:54.433 "malloc2", 00:18:54.433 "malloc3", 00:18:54.433 "malloc4" 00:18:54.433 ], 00:18:54.433 "strip_size_kb": 64, 00:18:54.433 "superblock": false, 00:18:54.433 "method": "bdev_raid_create", 00:18:54.433 "req_id": 1 00:18:54.433 } 00:18:54.433 Got JSON-RPC error response 
00:18:54.433 response: 00:18:54.433 { 00:18:54.433 "code": -17, 00:18:54.433 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:54.433 } 00:18:54.433 20:15:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:54.433 20:15:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:18:54.433 20:15:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:54.433 20:15:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:54.433 20:15:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:54.433 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.433 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:54.434 20:15:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.434 20:15:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.434 20:15:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.434 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:54.434 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:54.434 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:54.434 20:15:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.434 20:15:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.434 [2024-10-17 20:15:39.927637] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:54.434 [2024-10-17 20:15:39.927700] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:18:54.434 [2024-10-17 20:15:39.927731] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:54.434 [2024-10-17 20:15:39.927753] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:54.434 [2024-10-17 20:15:39.930902] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:54.434 [2024-10-17 20:15:39.930951] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:54.434 [2024-10-17 20:15:39.931072] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:54.434 [2024-10-17 20:15:39.931161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:54.434 pt1 00:18:54.434 20:15:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.434 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:18:54.434 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:54.434 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:54.434 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:54.434 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:54.434 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:54.434 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:54.434 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:54.434 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:54.434 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:18:54.434 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.434 20:15:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.434 20:15:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.434 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.434 20:15:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.434 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:54.434 "name": "raid_bdev1", 00:18:54.434 "uuid": "c368ccc0-9d42-4959-a964-f785a4809ae8", 00:18:54.434 "strip_size_kb": 64, 00:18:54.434 "state": "configuring", 00:18:54.434 "raid_level": "raid5f", 00:18:54.434 "superblock": true, 00:18:54.434 "num_base_bdevs": 4, 00:18:54.434 "num_base_bdevs_discovered": 1, 00:18:54.434 "num_base_bdevs_operational": 4, 00:18:54.434 "base_bdevs_list": [ 00:18:54.434 { 00:18:54.434 "name": "pt1", 00:18:54.434 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:54.434 "is_configured": true, 00:18:54.434 "data_offset": 2048, 00:18:54.434 "data_size": 63488 00:18:54.434 }, 00:18:54.434 { 00:18:54.434 "name": null, 00:18:54.434 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:54.434 "is_configured": false, 00:18:54.434 "data_offset": 2048, 00:18:54.434 "data_size": 63488 00:18:54.434 }, 00:18:54.434 { 00:18:54.434 "name": null, 00:18:54.434 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:54.434 "is_configured": false, 00:18:54.434 "data_offset": 2048, 00:18:54.434 "data_size": 63488 00:18:54.434 }, 00:18:54.434 { 00:18:54.434 "name": null, 00:18:54.434 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:54.434 "is_configured": false, 00:18:54.434 "data_offset": 2048, 00:18:54.434 "data_size": 63488 00:18:54.434 } 00:18:54.434 ] 00:18:54.434 }' 
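The `verify_raid_bdev_state` helper above selects `raid_bdev1` out of `bdev_raid_get_bdevs all` with jq and compares state, raid level, strip size, and operational base-bdev count. A hedged Python equivalent of those checks, applied to a trimmed copy of the JSON object shown in the log (only the fields the check touches are kept):

```python
import json

# Trimmed from the raid_bdev_info dump in the log: state "configuring",
# one of four base bdevs discovered (pt1 configured, the rest pending).
RAID_BDEV_INFO = json.loads("""{
  "name": "raid_bdev1",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "raid5f",
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 4,
  "base_bdevs_list": [
    {"name": "pt1", "is_configured": true},
    {"name": null, "is_configured": false},
    {"name": null, "is_configured": false},
    {"name": null, "is_configured": false}
  ]
}""")

def verify_raid_bdev_state(info, state, level, strip_kb, operational):
    # Same comparisons the shell helper performs on the jq-selected object.
    return (info["state"] == state
            and info["raid_level"] == level
            and info["strip_size_kb"] == strip_kb
            and info["num_base_bdevs_operational"] == operational)

print(verify_raid_bdev_state(RAID_BDEV_INFO, "configuring", "raid5f", 64, 4))  # → True
```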
00:18:54.434 20:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:54.434 20:15:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.001 20:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:18:55.001 20:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:55.001 20:15:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.001 20:15:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.001 [2024-10-17 20:15:40.451861] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:55.001 [2024-10-17 20:15:40.451974] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:55.001 [2024-10-17 20:15:40.452000] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:55.001 [2024-10-17 20:15:40.452073] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:55.001 [2024-10-17 20:15:40.452713] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:55.001 [2024-10-17 20:15:40.452748] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:55.001 [2024-10-17 20:15:40.452859] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:55.001 [2024-10-17 20:15:40.452910] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:55.001 pt2 00:18:55.001 20:15:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.001 20:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:18:55.001 20:15:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:18:55.001 20:15:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.001 [2024-10-17 20:15:40.459848] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:55.001 20:15:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.001 20:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:18:55.001 20:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:55.001 20:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:55.001 20:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:55.001 20:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:55.001 20:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:55.001 20:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:55.001 20:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:55.001 20:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:55.001 20:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:55.001 20:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.001 20:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.001 20:15:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.001 20:15:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.001 20:15:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:18:55.001 20:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:55.001 "name": "raid_bdev1", 00:18:55.001 "uuid": "c368ccc0-9d42-4959-a964-f785a4809ae8", 00:18:55.001 "strip_size_kb": 64, 00:18:55.001 "state": "configuring", 00:18:55.001 "raid_level": "raid5f", 00:18:55.001 "superblock": true, 00:18:55.001 "num_base_bdevs": 4, 00:18:55.001 "num_base_bdevs_discovered": 1, 00:18:55.001 "num_base_bdevs_operational": 4, 00:18:55.001 "base_bdevs_list": [ 00:18:55.001 { 00:18:55.001 "name": "pt1", 00:18:55.001 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:55.001 "is_configured": true, 00:18:55.001 "data_offset": 2048, 00:18:55.001 "data_size": 63488 00:18:55.001 }, 00:18:55.001 { 00:18:55.001 "name": null, 00:18:55.001 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:55.001 "is_configured": false, 00:18:55.002 "data_offset": 0, 00:18:55.002 "data_size": 63488 00:18:55.002 }, 00:18:55.002 { 00:18:55.002 "name": null, 00:18:55.002 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:55.002 "is_configured": false, 00:18:55.002 "data_offset": 2048, 00:18:55.002 "data_size": 63488 00:18:55.002 }, 00:18:55.002 { 00:18:55.002 "name": null, 00:18:55.002 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:55.002 "is_configured": false, 00:18:55.002 "data_offset": 2048, 00:18:55.002 "data_size": 63488 00:18:55.002 } 00:18:55.002 ] 00:18:55.002 }' 00:18:55.002 20:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:55.002 20:15:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.569 20:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:55.569 20:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:55.569 20:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:18:55.569 20:15:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.569 20:15:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.569 [2024-10-17 20:15:40.964101] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:55.569 [2024-10-17 20:15:40.964170] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:55.569 [2024-10-17 20:15:40.964210] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:55.569 [2024-10-17 20:15:40.964226] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:55.569 [2024-10-17 20:15:40.964800] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:55.569 [2024-10-17 20:15:40.964824] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:55.569 [2024-10-17 20:15:40.964925] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:55.569 [2024-10-17 20:15:40.964954] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:55.569 pt2 00:18:55.569 20:15:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.569 20:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:55.569 20:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:55.569 20:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:55.569 20:15:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.569 20:15:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.569 [2024-10-17 20:15:40.976025] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:18:55.569 [2024-10-17 20:15:40.976123] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:55.569 [2024-10-17 20:15:40.976151] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:55.569 [2024-10-17 20:15:40.976164] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:55.569 [2024-10-17 20:15:40.976615] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:55.569 [2024-10-17 20:15:40.976645] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:55.569 [2024-10-17 20:15:40.976731] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:55.569 [2024-10-17 20:15:40.976758] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:55.569 pt3 00:18:55.569 20:15:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.569 20:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:55.569 20:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:55.569 20:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:55.569 20:15:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.569 20:15:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.569 [2024-10-17 20:15:40.983978] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:55.569 [2024-10-17 20:15:40.984083] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:55.569 [2024-10-17 20:15:40.984109] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:55.569 [2024-10-17 20:15:40.984121] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:55.569 [2024-10-17 20:15:40.984605] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:55.569 [2024-10-17 20:15:40.984635] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:55.569 [2024-10-17 20:15:40.984725] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:55.569 [2024-10-17 20:15:40.984752] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:55.569 [2024-10-17 20:15:40.984929] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:55.569 [2024-10-17 20:15:40.984944] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:55.569 [2024-10-17 20:15:40.985270] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:55.569 [2024-10-17 20:15:40.991930] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:55.569 [2024-10-17 20:15:40.991964] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:55.569 [2024-10-17 20:15:40.992241] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:55.569 pt4 00:18:55.569 20:15:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.569 20:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:55.569 20:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:55.569 20:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:55.569 20:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:55.569 20:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:18:55.569 20:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:55.569 20:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:55.569 20:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:55.569 20:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:55.569 20:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:55.569 20:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:55.569 20:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:55.569 20:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.569 20:15:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.569 20:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.569 20:15:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.569 20:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.569 20:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:55.569 "name": "raid_bdev1", 00:18:55.569 "uuid": "c368ccc0-9d42-4959-a964-f785a4809ae8", 00:18:55.569 "strip_size_kb": 64, 00:18:55.569 "state": "online", 00:18:55.569 "raid_level": "raid5f", 00:18:55.569 "superblock": true, 00:18:55.569 "num_base_bdevs": 4, 00:18:55.569 "num_base_bdevs_discovered": 4, 00:18:55.569 "num_base_bdevs_operational": 4, 00:18:55.569 "base_bdevs_list": [ 00:18:55.569 { 00:18:55.569 "name": "pt1", 00:18:55.569 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:55.569 "is_configured": true, 00:18:55.569 
"data_offset": 2048, 00:18:55.569 "data_size": 63488 00:18:55.569 }, 00:18:55.569 { 00:18:55.569 "name": "pt2", 00:18:55.569 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:55.569 "is_configured": true, 00:18:55.569 "data_offset": 2048, 00:18:55.569 "data_size": 63488 00:18:55.569 }, 00:18:55.569 { 00:18:55.569 "name": "pt3", 00:18:55.569 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:55.569 "is_configured": true, 00:18:55.569 "data_offset": 2048, 00:18:55.569 "data_size": 63488 00:18:55.569 }, 00:18:55.569 { 00:18:55.569 "name": "pt4", 00:18:55.569 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:55.569 "is_configured": true, 00:18:55.569 "data_offset": 2048, 00:18:55.569 "data_size": 63488 00:18:55.569 } 00:18:55.569 ] 00:18:55.569 }' 00:18:55.569 20:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:55.569 20:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.897 20:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:56.177 20:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:56.177 20:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:56.177 20:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:56.177 20:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:56.177 20:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:56.177 20:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:56.177 20:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.177 20:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:56.177 20:15:41 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.177 [2024-10-17 20:15:41.532268] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:56.177 20:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.177 20:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:56.177 "name": "raid_bdev1", 00:18:56.177 "aliases": [ 00:18:56.177 "c368ccc0-9d42-4959-a964-f785a4809ae8" 00:18:56.177 ], 00:18:56.177 "product_name": "Raid Volume", 00:18:56.177 "block_size": 512, 00:18:56.177 "num_blocks": 190464, 00:18:56.177 "uuid": "c368ccc0-9d42-4959-a964-f785a4809ae8", 00:18:56.177 "assigned_rate_limits": { 00:18:56.177 "rw_ios_per_sec": 0, 00:18:56.177 "rw_mbytes_per_sec": 0, 00:18:56.177 "r_mbytes_per_sec": 0, 00:18:56.177 "w_mbytes_per_sec": 0 00:18:56.177 }, 00:18:56.177 "claimed": false, 00:18:56.177 "zoned": false, 00:18:56.177 "supported_io_types": { 00:18:56.177 "read": true, 00:18:56.177 "write": true, 00:18:56.177 "unmap": false, 00:18:56.177 "flush": false, 00:18:56.177 "reset": true, 00:18:56.177 "nvme_admin": false, 00:18:56.177 "nvme_io": false, 00:18:56.177 "nvme_io_md": false, 00:18:56.177 "write_zeroes": true, 00:18:56.177 "zcopy": false, 00:18:56.177 "get_zone_info": false, 00:18:56.177 "zone_management": false, 00:18:56.177 "zone_append": false, 00:18:56.177 "compare": false, 00:18:56.177 "compare_and_write": false, 00:18:56.177 "abort": false, 00:18:56.177 "seek_hole": false, 00:18:56.177 "seek_data": false, 00:18:56.177 "copy": false, 00:18:56.177 "nvme_iov_md": false 00:18:56.177 }, 00:18:56.177 "driver_specific": { 00:18:56.177 "raid": { 00:18:56.177 "uuid": "c368ccc0-9d42-4959-a964-f785a4809ae8", 00:18:56.177 "strip_size_kb": 64, 00:18:56.177 "state": "online", 00:18:56.177 "raid_level": "raid5f", 00:18:56.177 "superblock": true, 00:18:56.177 "num_base_bdevs": 4, 00:18:56.177 "num_base_bdevs_discovered": 4, 
00:18:56.177 "num_base_bdevs_operational": 4, 00:18:56.177 "base_bdevs_list": [ 00:18:56.177 { 00:18:56.177 "name": "pt1", 00:18:56.177 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:56.177 "is_configured": true, 00:18:56.177 "data_offset": 2048, 00:18:56.177 "data_size": 63488 00:18:56.177 }, 00:18:56.177 { 00:18:56.177 "name": "pt2", 00:18:56.177 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:56.177 "is_configured": true, 00:18:56.177 "data_offset": 2048, 00:18:56.177 "data_size": 63488 00:18:56.177 }, 00:18:56.177 { 00:18:56.177 "name": "pt3", 00:18:56.177 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:56.177 "is_configured": true, 00:18:56.177 "data_offset": 2048, 00:18:56.177 "data_size": 63488 00:18:56.177 }, 00:18:56.177 { 00:18:56.177 "name": "pt4", 00:18:56.177 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:56.177 "is_configured": true, 00:18:56.177 "data_offset": 2048, 00:18:56.177 "data_size": 63488 00:18:56.177 } 00:18:56.177 ] 00:18:56.177 } 00:18:56.177 } 00:18:56.177 }' 00:18:56.177 20:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:56.177 20:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:56.177 pt2 00:18:56.177 pt3 00:18:56.177 pt4' 00:18:56.177 20:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:56.177 20:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:56.177 20:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:56.177 20:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:56.177 20:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.177 20:15:41 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.177 20:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:56.177 20:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.177 20:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:56.177 20:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:56.177 20:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:56.177 20:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:56.177 20:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:56.178 20:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.178 20:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.178 20:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.178 20:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:56.178 20:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:56.178 20:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:56.178 20:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:18:56.178 20:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:56.178 20:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.178 
20:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.178 20:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.437 20:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:56.437 20:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:56.437 20:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:56.437 20:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:18:56.437 20:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:56.437 20:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.437 20:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.437 20:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.437 20:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:56.437 20:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:56.437 20:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:56.437 20:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.437 20:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:56.437 20:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.437 [2024-10-17 20:15:41.904276] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:56.437 20:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
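The property checks above pull the configured base-bdev names out of the `bdev_get_bdevs` dump with jq (`.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name`) and then compare `block_size`/`md_size` per base bdev. A small Python sketch of the same selection, over a trimmed copy of the dump from the log (note `num_blocks` 190464 = 3 × 63488, consistent with raid5f keeping data on n-1 of the 4 base bdevs):

```python
import json

# Trimmed from the "Raid Volume" dump in the log above.
BDEV_DUMP = json.loads("""{
  "name": "raid_bdev1",
  "block_size": 512,
  "num_blocks": 190464,
  "driver_specific": {
    "raid": {
      "raid_level": "raid5f",
      "num_base_bdevs": 4,
      "base_bdevs_list": [
        {"name": "pt1", "is_configured": true, "data_size": 63488},
        {"name": "pt2", "is_configured": true, "data_size": 63488},
        {"name": "pt3", "is_configured": true, "data_size": 63488},
        {"name": "pt4", "is_configured": true, "data_size": 63488}
      ]
    }
  }
}""")

def configured_base_bdevs(dump):
    # jq: .driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name
    return [b["name"]
            for b in dump["driver_specific"]["raid"]["base_bdevs_list"]
            if b["is_configured"]]

print(configured_base_bdevs(BDEV_DUMP))  # → ['pt1', 'pt2', 'pt3', 'pt4']
```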
00:18:56.437 20:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' c368ccc0-9d42-4959-a964-f785a4809ae8 '!=' c368ccc0-9d42-4959-a964-f785a4809ae8 ']' 00:18:56.437 20:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:18:56.437 20:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:56.437 20:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:18:56.437 20:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:56.437 20:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.437 20:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.437 [2024-10-17 20:15:41.956056] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:56.437 20:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.437 20:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:56.437 20:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:56.437 20:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:56.437 20:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:56.437 20:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:56.437 20:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:56.437 20:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:56.437 20:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:56.437 20:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:18:56.437 20:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:56.437 20:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.437 20:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.437 20:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:56.437 20:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.437 20:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.437 20:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:56.437 "name": "raid_bdev1", 00:18:56.437 "uuid": "c368ccc0-9d42-4959-a964-f785a4809ae8", 00:18:56.437 "strip_size_kb": 64, 00:18:56.437 "state": "online", 00:18:56.437 "raid_level": "raid5f", 00:18:56.437 "superblock": true, 00:18:56.437 "num_base_bdevs": 4, 00:18:56.437 "num_base_bdevs_discovered": 3, 00:18:56.437 "num_base_bdevs_operational": 3, 00:18:56.437 "base_bdevs_list": [ 00:18:56.437 { 00:18:56.437 "name": null, 00:18:56.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:56.437 "is_configured": false, 00:18:56.437 "data_offset": 0, 00:18:56.437 "data_size": 63488 00:18:56.437 }, 00:18:56.437 { 00:18:56.437 "name": "pt2", 00:18:56.437 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:56.437 "is_configured": true, 00:18:56.437 "data_offset": 2048, 00:18:56.437 "data_size": 63488 00:18:56.437 }, 00:18:56.437 { 00:18:56.437 "name": "pt3", 00:18:56.437 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:56.437 "is_configured": true, 00:18:56.437 "data_offset": 2048, 00:18:56.437 "data_size": 63488 00:18:56.437 }, 00:18:56.437 { 00:18:56.437 "name": "pt4", 00:18:56.437 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:56.437 "is_configured": true, 00:18:56.437 
"data_offset": 2048, 00:18:56.437 "data_size": 63488 00:18:56.437 } 00:18:56.437 ] 00:18:56.437 }' 00:18:56.437 20:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:56.437 20:15:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.004 20:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:57.004 20:15:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.004 20:15:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.004 [2024-10-17 20:15:42.504347] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:57.004 [2024-10-17 20:15:42.504390] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:57.004 [2024-10-17 20:15:42.504488] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:57.004 [2024-10-17 20:15:42.504600] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:57.004 [2024-10-17 20:15:42.504617] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:57.004 20:15:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.004 20:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.004 20:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:57.004 20:15:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.004 20:15:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.004 20:15:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.004 20:15:42 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:57.004 20:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:57.004 20:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:57.004 20:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:57.004 20:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:57.004 20:15:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.004 20:15:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.004 20:15:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.004 20:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:57.004 20:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:57.004 20:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:18:57.004 20:15:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.004 20:15:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.005 20:15:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.005 20:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:57.005 20:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:57.005 20:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:18:57.005 20:15:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.005 20:15:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.005 20:15:42 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.005 20:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:57.005 20:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:57.005 20:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:57.005 20:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:57.005 20:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:57.005 20:15:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.005 20:15:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.005 [2024-10-17 20:15:42.596318] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:57.005 [2024-10-17 20:15:42.596410] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:57.005 [2024-10-17 20:15:42.596449] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:18:57.005 [2024-10-17 20:15:42.596467] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:57.005 [2024-10-17 20:15:42.599785] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:57.005 [2024-10-17 20:15:42.599836] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:57.005 [2024-10-17 20:15:42.599964] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:57.005 [2024-10-17 20:15:42.600069] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:57.005 pt2 00:18:57.005 20:15:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.005 20:15:42 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:18:57.005 20:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:57.005 20:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:57.005 20:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:57.005 20:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:57.005 20:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:57.005 20:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:57.005 20:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:57.005 20:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:57.005 20:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:57.005 20:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.005 20:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:57.005 20:15:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.005 20:15:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.005 20:15:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.264 20:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:57.264 "name": "raid_bdev1", 00:18:57.264 "uuid": "c368ccc0-9d42-4959-a964-f785a4809ae8", 00:18:57.264 "strip_size_kb": 64, 00:18:57.264 "state": "configuring", 00:18:57.264 "raid_level": "raid5f", 00:18:57.264 "superblock": true, 00:18:57.264 
"num_base_bdevs": 4, 00:18:57.264 "num_base_bdevs_discovered": 1, 00:18:57.264 "num_base_bdevs_operational": 3, 00:18:57.264 "base_bdevs_list": [ 00:18:57.264 { 00:18:57.264 "name": null, 00:18:57.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.264 "is_configured": false, 00:18:57.264 "data_offset": 2048, 00:18:57.264 "data_size": 63488 00:18:57.264 }, 00:18:57.264 { 00:18:57.264 "name": "pt2", 00:18:57.264 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:57.264 "is_configured": true, 00:18:57.264 "data_offset": 2048, 00:18:57.264 "data_size": 63488 00:18:57.264 }, 00:18:57.264 { 00:18:57.264 "name": null, 00:18:57.264 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:57.264 "is_configured": false, 00:18:57.264 "data_offset": 2048, 00:18:57.264 "data_size": 63488 00:18:57.264 }, 00:18:57.264 { 00:18:57.264 "name": null, 00:18:57.264 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:57.264 "is_configured": false, 00:18:57.264 "data_offset": 2048, 00:18:57.264 "data_size": 63488 00:18:57.264 } 00:18:57.264 ] 00:18:57.264 }' 00:18:57.264 20:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:57.264 20:15:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.523 20:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:18:57.523 20:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:57.523 20:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:57.523 20:15:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.523 20:15:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.523 [2024-10-17 20:15:43.132519] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:57.523 [2024-10-17 
20:15:43.132625] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:57.523 [2024-10-17 20:15:43.132655] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:18:57.523 [2024-10-17 20:15:43.132668] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:57.523 [2024-10-17 20:15:43.133232] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:57.523 [2024-10-17 20:15:43.133255] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:57.523 [2024-10-17 20:15:43.133379] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:57.523 [2024-10-17 20:15:43.133419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:57.523 pt3 00:18:57.523 20:15:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.523 20:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:18:57.523 20:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:57.523 20:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:57.523 20:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:57.523 20:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:57.523 20:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:57.523 20:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:57.523 20:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:57.523 20:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:18:57.523 20:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:57.523 20:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.523 20:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:57.523 20:15:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.523 20:15:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.523 20:15:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.781 20:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:57.781 "name": "raid_bdev1", 00:18:57.781 "uuid": "c368ccc0-9d42-4959-a964-f785a4809ae8", 00:18:57.781 "strip_size_kb": 64, 00:18:57.781 "state": "configuring", 00:18:57.781 "raid_level": "raid5f", 00:18:57.781 "superblock": true, 00:18:57.781 "num_base_bdevs": 4, 00:18:57.781 "num_base_bdevs_discovered": 2, 00:18:57.781 "num_base_bdevs_operational": 3, 00:18:57.781 "base_bdevs_list": [ 00:18:57.781 { 00:18:57.781 "name": null, 00:18:57.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.781 "is_configured": false, 00:18:57.781 "data_offset": 2048, 00:18:57.781 "data_size": 63488 00:18:57.781 }, 00:18:57.781 { 00:18:57.781 "name": "pt2", 00:18:57.781 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:57.781 "is_configured": true, 00:18:57.781 "data_offset": 2048, 00:18:57.781 "data_size": 63488 00:18:57.781 }, 00:18:57.781 { 00:18:57.781 "name": "pt3", 00:18:57.781 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:57.781 "is_configured": true, 00:18:57.781 "data_offset": 2048, 00:18:57.781 "data_size": 63488 00:18:57.781 }, 00:18:57.781 { 00:18:57.781 "name": null, 00:18:57.781 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:57.781 "is_configured": false, 00:18:57.781 "data_offset": 2048, 
00:18:57.781 "data_size": 63488 00:18:57.781 } 00:18:57.781 ] 00:18:57.781 }' 00:18:57.781 20:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:57.781 20:15:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.039 20:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:18:58.039 20:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:58.039 20:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:18:58.039 20:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:58.039 20:15:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.039 20:15:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.039 [2024-10-17 20:15:43.656667] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:58.039 [2024-10-17 20:15:43.656765] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:58.039 [2024-10-17 20:15:43.656798] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:18:58.039 [2024-10-17 20:15:43.656811] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:58.039 [2024-10-17 20:15:43.657464] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:58.039 [2024-10-17 20:15:43.657495] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:58.039 [2024-10-17 20:15:43.657597] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:58.039 [2024-10-17 20:15:43.657633] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:58.039 [2024-10-17 20:15:43.657801] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:58.039 [2024-10-17 20:15:43.657815] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:58.039 [2024-10-17 20:15:43.658134] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:58.039 [2024-10-17 20:15:43.664174] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:58.039 [2024-10-17 20:15:43.664254] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:58.039 [2024-10-17 20:15:43.664651] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:58.039 pt4 00:18:58.039 20:15:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.039 20:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:58.039 20:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:58.039 20:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:58.039 20:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:58.039 20:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:58.039 20:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:58.039 20:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:58.039 20:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:58.039 20:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:58.039 20:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:58.039 
20:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.039 20:15:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.039 20:15:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.039 20:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:58.039 20:15:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.298 20:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:58.298 "name": "raid_bdev1", 00:18:58.298 "uuid": "c368ccc0-9d42-4959-a964-f785a4809ae8", 00:18:58.298 "strip_size_kb": 64, 00:18:58.298 "state": "online", 00:18:58.298 "raid_level": "raid5f", 00:18:58.298 "superblock": true, 00:18:58.298 "num_base_bdevs": 4, 00:18:58.298 "num_base_bdevs_discovered": 3, 00:18:58.298 "num_base_bdevs_operational": 3, 00:18:58.298 "base_bdevs_list": [ 00:18:58.298 { 00:18:58.298 "name": null, 00:18:58.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.298 "is_configured": false, 00:18:58.298 "data_offset": 2048, 00:18:58.298 "data_size": 63488 00:18:58.298 }, 00:18:58.298 { 00:18:58.298 "name": "pt2", 00:18:58.298 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:58.298 "is_configured": true, 00:18:58.298 "data_offset": 2048, 00:18:58.298 "data_size": 63488 00:18:58.298 }, 00:18:58.298 { 00:18:58.298 "name": "pt3", 00:18:58.298 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:58.298 "is_configured": true, 00:18:58.298 "data_offset": 2048, 00:18:58.298 "data_size": 63488 00:18:58.298 }, 00:18:58.298 { 00:18:58.298 "name": "pt4", 00:18:58.298 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:58.298 "is_configured": true, 00:18:58.298 "data_offset": 2048, 00:18:58.298 "data_size": 63488 00:18:58.298 } 00:18:58.298 ] 00:18:58.298 }' 00:18:58.298 20:15:43 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:58.298 20:15:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.556 20:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:58.556 20:15:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.556 20:15:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.556 [2024-10-17 20:15:44.199841] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:58.556 [2024-10-17 20:15:44.199878] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:58.556 [2024-10-17 20:15:44.199981] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:58.557 [2024-10-17 20:15:44.200102] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:58.557 [2024-10-17 20:15:44.200122] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:58.557 20:15:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.815 20:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.815 20:15:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.815 20:15:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.815 20:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:58.815 20:15:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.815 20:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:58.815 20:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:18:58.815 20:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:18:58.815 20:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:18:58.815 20:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:18:58.815 20:15:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.815 20:15:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.815 20:15:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.815 20:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:58.815 20:15:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.815 20:15:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.815 [2024-10-17 20:15:44.271826] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:58.815 [2024-10-17 20:15:44.272128] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:58.815 [2024-10-17 20:15:44.272167] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:18:58.815 [2024-10-17 20:15:44.272185] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:58.815 [2024-10-17 20:15:44.274959] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:58.815 [2024-10-17 20:15:44.275050] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:58.815 [2024-10-17 20:15:44.275162] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:58.815 [2024-10-17 20:15:44.275225] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:58.815 
[2024-10-17 20:15:44.275399] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:58.815 [2024-10-17 20:15:44.275418] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:58.815 [2024-10-17 20:15:44.275436] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:58.815 [2024-10-17 20:15:44.275495] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:58.815 [2024-10-17 20:15:44.275655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:58.815 pt1 00:18:58.815 20:15:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.815 20:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:18:58.815 20:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:18:58.815 20:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:58.815 20:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:58.815 20:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:58.815 20:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:58.815 20:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:58.815 20:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:58.815 20:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:58.815 20:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:58.815 20:15:44 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:18:58.815 20:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.815 20:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:58.815 20:15:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.815 20:15:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.815 20:15:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.815 20:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:58.815 "name": "raid_bdev1", 00:18:58.815 "uuid": "c368ccc0-9d42-4959-a964-f785a4809ae8", 00:18:58.815 "strip_size_kb": 64, 00:18:58.815 "state": "configuring", 00:18:58.815 "raid_level": "raid5f", 00:18:58.815 "superblock": true, 00:18:58.815 "num_base_bdevs": 4, 00:18:58.815 "num_base_bdevs_discovered": 2, 00:18:58.815 "num_base_bdevs_operational": 3, 00:18:58.815 "base_bdevs_list": [ 00:18:58.815 { 00:18:58.815 "name": null, 00:18:58.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.815 "is_configured": false, 00:18:58.815 "data_offset": 2048, 00:18:58.815 "data_size": 63488 00:18:58.815 }, 00:18:58.815 { 00:18:58.815 "name": "pt2", 00:18:58.815 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:58.815 "is_configured": true, 00:18:58.815 "data_offset": 2048, 00:18:58.815 "data_size": 63488 00:18:58.815 }, 00:18:58.815 { 00:18:58.815 "name": "pt3", 00:18:58.815 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:58.815 "is_configured": true, 00:18:58.815 "data_offset": 2048, 00:18:58.815 "data_size": 63488 00:18:58.815 }, 00:18:58.815 { 00:18:58.815 "name": null, 00:18:58.815 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:58.815 "is_configured": false, 00:18:58.815 "data_offset": 2048, 00:18:58.815 "data_size": 63488 00:18:58.815 } 00:18:58.815 ] 
00:18:58.815 }' 00:18:58.815 20:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:58.815 20:15:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.382 20:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:18:59.382 20:15:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.382 20:15:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.382 20:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:59.382 20:15:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.382 20:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:18:59.382 20:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:59.382 20:15:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.382 20:15:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.382 [2024-10-17 20:15:44.836097] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:59.382 [2024-10-17 20:15:44.836173] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:59.382 [2024-10-17 20:15:44.836230] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:18:59.382 [2024-10-17 20:15:44.836245] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:59.382 [2024-10-17 20:15:44.836839] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:59.382 [2024-10-17 20:15:44.836860] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:18:59.382 [2024-10-17 20:15:44.836959] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:59.382 [2024-10-17 20:15:44.836987] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:59.382 [2024-10-17 20:15:44.837206] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:59.382 [2024-10-17 20:15:44.837221] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:59.382 [2024-10-17 20:15:44.837557] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:59.382 [2024-10-17 20:15:44.843570] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:59.382 pt4 00:18:59.382 [2024-10-17 20:15:44.843890] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:59.382 [2024-10-17 20:15:44.844319] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:59.382 20:15:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.382 20:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:59.382 20:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:59.382 20:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:59.382 20:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:59.382 20:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:59.382 20:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:59.382 20:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:59.382 20:15:44 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:59.382 20:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:59.382 20:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:59.382 20:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.382 20:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:59.382 20:15:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.382 20:15:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.382 20:15:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.382 20:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:59.382 "name": "raid_bdev1", 00:18:59.382 "uuid": "c368ccc0-9d42-4959-a964-f785a4809ae8", 00:18:59.382 "strip_size_kb": 64, 00:18:59.382 "state": "online", 00:18:59.382 "raid_level": "raid5f", 00:18:59.382 "superblock": true, 00:18:59.382 "num_base_bdevs": 4, 00:18:59.382 "num_base_bdevs_discovered": 3, 00:18:59.382 "num_base_bdevs_operational": 3, 00:18:59.382 "base_bdevs_list": [ 00:18:59.382 { 00:18:59.382 "name": null, 00:18:59.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.382 "is_configured": false, 00:18:59.382 "data_offset": 2048, 00:18:59.382 "data_size": 63488 00:18:59.382 }, 00:18:59.382 { 00:18:59.382 "name": "pt2", 00:18:59.382 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:59.382 "is_configured": true, 00:18:59.382 "data_offset": 2048, 00:18:59.382 "data_size": 63488 00:18:59.382 }, 00:18:59.382 { 00:18:59.382 "name": "pt3", 00:18:59.382 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:59.382 "is_configured": true, 00:18:59.382 "data_offset": 2048, 00:18:59.382 "data_size": 63488 
00:18:59.382 }, 00:18:59.382 { 00:18:59.382 "name": "pt4", 00:18:59.382 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:59.382 "is_configured": true, 00:18:59.382 "data_offset": 2048, 00:18:59.382 "data_size": 63488 00:18:59.382 } 00:18:59.382 ] 00:18:59.382 }' 00:18:59.382 20:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:59.382 20:15:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.949 20:15:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:59.949 20:15:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:59.949 20:15:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.949 20:15:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.949 20:15:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.949 20:15:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:59.949 20:15:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:59.949 20:15:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.949 20:15:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.949 20:15:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:59.949 [2024-10-17 20:15:45.435791] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:59.949 20:15:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.949 20:15:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' c368ccc0-9d42-4959-a964-f785a4809ae8 '!=' c368ccc0-9d42-4959-a964-f785a4809ae8 ']' 00:18:59.949 20:15:45 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84418 00:18:59.949 20:15:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 84418 ']' 00:18:59.949 20:15:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 84418 00:18:59.949 20:15:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:18:59.949 20:15:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:59.949 20:15:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84418 00:18:59.949 killing process with pid 84418 00:18:59.949 20:15:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:59.949 20:15:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:59.949 20:15:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84418' 00:18:59.949 20:15:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 84418 00:18:59.949 [2024-10-17 20:15:45.518689] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:59.949 20:15:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 84418 00:18:59.949 [2024-10-17 20:15:45.518796] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:59.949 [2024-10-17 20:15:45.518880] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:59.949 [2024-10-17 20:15:45.518899] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:19:00.206 [2024-10-17 20:15:45.835120] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:01.583 20:15:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:19:01.583 
00:19:01.583 real 0m9.334s 00:19:01.583 user 0m15.376s 00:19:01.583 sys 0m1.384s 00:19:01.583 ************************************ 00:19:01.583 END TEST raid5f_superblock_test 00:19:01.583 ************************************ 00:19:01.583 20:15:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:01.583 20:15:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.583 20:15:46 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:19:01.583 20:15:46 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:19:01.583 20:15:46 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:19:01.583 20:15:46 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:01.583 20:15:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:01.583 ************************************ 00:19:01.583 START TEST raid5f_rebuild_test 00:19:01.583 ************************************ 00:19:01.583 20:15:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 false false true 00:19:01.583 20:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:19:01.583 20:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:19:01.583 20:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:19:01.583 20:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:01.583 20:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:01.583 20:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:01.583 20:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:01.583 20:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:19:01.583 20:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:01.583 20:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:01.583 20:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:01.583 20:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:01.583 20:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:01.583 20:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:19:01.583 20:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:01.583 20:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:01.583 20:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:19:01.583 20:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:01.583 20:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:01.583 20:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:01.583 20:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:01.583 20:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:01.583 20:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:01.583 20:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:01.583 20:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:01.583 20:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:01.583 20:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:19:01.583 20:15:46 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:19:01.583 20:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:19:01.583 20:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:19:01.583 20:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:19:01.583 20:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=84905 00:19:01.583 20:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 84905 00:19:01.583 20:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:01.583 20:15:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 84905 ']' 00:19:01.583 20:15:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:01.583 20:15:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:01.583 20:15:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:01.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:01.583 20:15:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:01.583 20:15:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.583 [2024-10-17 20:15:46.992024] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 00:19:01.583 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:01.583 Zero copy mechanism will not be used. 
00:19:01.583 [2024-10-17 20:15:46.992465] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84905 ] 00:19:01.583 [2024-10-17 20:15:47.171199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.841 [2024-10-17 20:15:47.348111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:02.099 [2024-10-17 20:15:47.605614] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:02.099 [2024-10-17 20:15:47.605725] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:02.357 20:15:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:02.357 20:15:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:19:02.357 20:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:02.357 20:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:02.357 20:15:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.357 20:15:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.616 BaseBdev1_malloc 00:19:02.616 20:15:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.616 20:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:02.616 20:15:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.616 20:15:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.616 [2024-10-17 20:15:48.036419] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:19:02.616 [2024-10-17 20:15:48.036779] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:02.616 [2024-10-17 20:15:48.036826] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:02.616 [2024-10-17 20:15:48.036847] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:02.616 [2024-10-17 20:15:48.039837] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:02.616 [2024-10-17 20:15:48.040033] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:02.616 BaseBdev1 00:19:02.616 20:15:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.616 20:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:02.616 20:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:02.616 20:15:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.616 20:15:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.616 BaseBdev2_malloc 00:19:02.616 20:15:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.616 20:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:02.616 20:15:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.616 20:15:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.616 [2024-10-17 20:15:48.096222] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:02.616 [2024-10-17 20:15:48.096320] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:02.616 [2024-10-17 20:15:48.096352] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:02.616 [2024-10-17 20:15:48.096370] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:02.616 [2024-10-17 20:15:48.099303] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:02.616 [2024-10-17 20:15:48.099352] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:02.616 BaseBdev2 00:19:02.616 20:15:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.616 20:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:02.616 20:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:02.616 20:15:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.616 20:15:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.616 BaseBdev3_malloc 00:19:02.616 20:15:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.616 20:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:19:02.616 20:15:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.616 20:15:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.616 [2024-10-17 20:15:48.159857] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:19:02.616 [2024-10-17 20:15:48.159965] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:02.616 [2024-10-17 20:15:48.160026] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:02.616 [2024-10-17 20:15:48.160050] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:02.616 
[2024-10-17 20:15:48.163253] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:02.616 [2024-10-17 20:15:48.163305] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:02.616 BaseBdev3 00:19:02.616 20:15:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.616 20:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:02.616 20:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:19:02.616 20:15:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.616 20:15:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.616 BaseBdev4_malloc 00:19:02.616 20:15:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.616 20:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:19:02.616 20:15:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.616 20:15:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.616 [2024-10-17 20:15:48.216401] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:19:02.616 [2024-10-17 20:15:48.216529] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:02.616 [2024-10-17 20:15:48.216564] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:19:02.616 [2024-10-17 20:15:48.216583] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:02.616 [2024-10-17 20:15:48.219684] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:02.616 [2024-10-17 20:15:48.219738] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev4 00:19:02.616 BaseBdev4 00:19:02.616 20:15:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.616 20:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:19:02.616 20:15:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.616 20:15:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.616 spare_malloc 00:19:02.616 20:15:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.616 20:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:02.616 20:15:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.616 20:15:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.874 spare_delay 00:19:02.874 20:15:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.874 20:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:02.874 20:15:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.874 20:15:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.874 [2024-10-17 20:15:48.285087] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:02.874 [2024-10-17 20:15:48.285198] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:02.874 [2024-10-17 20:15:48.285233] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:02.874 [2024-10-17 20:15:48.285252] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:02.874 [2024-10-17 20:15:48.288402] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:02.874 [2024-10-17 20:15:48.288693] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:02.874 spare 00:19:02.874 20:15:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.874 20:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:19:02.874 20:15:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.874 20:15:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.874 [2024-10-17 20:15:48.293162] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:02.874 [2024-10-17 20:15:48.295791] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:02.875 [2024-10-17 20:15:48.296043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:02.875 [2024-10-17 20:15:48.296142] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:02.875 [2024-10-17 20:15:48.296296] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:02.875 [2024-10-17 20:15:48.296317] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:19:02.875 [2024-10-17 20:15:48.296682] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:02.875 [2024-10-17 20:15:48.303602] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:02.875 [2024-10-17 20:15:48.303630] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:02.875 [2024-10-17 20:15:48.303960] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:02.875 20:15:48 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.875 20:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:19:02.875 20:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:02.875 20:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:02.875 20:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:02.875 20:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:02.875 20:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:02.875 20:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:02.875 20:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:02.875 20:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:02.875 20:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:02.875 20:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.875 20:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.875 20:15:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.875 20:15:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.875 20:15:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.875 20:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:02.875 "name": "raid_bdev1", 00:19:02.875 "uuid": "5154d161-ea9b-4b7c-9d3d-ff3f2be200f2", 00:19:02.875 "strip_size_kb": 64, 00:19:02.875 "state": "online", 00:19:02.875 
"raid_level": "raid5f", 00:19:02.875 "superblock": false, 00:19:02.875 "num_base_bdevs": 4, 00:19:02.875 "num_base_bdevs_discovered": 4, 00:19:02.875 "num_base_bdevs_operational": 4, 00:19:02.875 "base_bdevs_list": [ 00:19:02.875 { 00:19:02.875 "name": "BaseBdev1", 00:19:02.875 "uuid": "22f0a308-3e01-5120-916c-6db415f07862", 00:19:02.875 "is_configured": true, 00:19:02.875 "data_offset": 0, 00:19:02.875 "data_size": 65536 00:19:02.875 }, 00:19:02.875 { 00:19:02.875 "name": "BaseBdev2", 00:19:02.875 "uuid": "c3e6e967-2f76-5555-95b0-aff04642fbb1", 00:19:02.875 "is_configured": true, 00:19:02.875 "data_offset": 0, 00:19:02.875 "data_size": 65536 00:19:02.875 }, 00:19:02.875 { 00:19:02.875 "name": "BaseBdev3", 00:19:02.875 "uuid": "b2e1f11c-eeaf-555f-a815-4a11202ee55f", 00:19:02.875 "is_configured": true, 00:19:02.875 "data_offset": 0, 00:19:02.875 "data_size": 65536 00:19:02.875 }, 00:19:02.875 { 00:19:02.875 "name": "BaseBdev4", 00:19:02.875 "uuid": "decb9f90-c65e-5ce5-907f-8b13a4c09855", 00:19:02.875 "is_configured": true, 00:19:02.875 "data_offset": 0, 00:19:02.875 "data_size": 65536 00:19:02.875 } 00:19:02.875 ] 00:19:02.875 }' 00:19:02.875 20:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:02.875 20:15:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.442 20:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:03.442 20:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:03.442 20:15:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.442 20:15:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.442 [2024-10-17 20:15:48.816995] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:03.442 20:15:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:19:03.442 20:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:19:03.442 20:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.442 20:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:03.442 20:15:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.442 20:15:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.442 20:15:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.442 20:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:19:03.442 20:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:03.442 20:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:03.442 20:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:03.442 20:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:19:03.442 20:15:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:03.442 20:15:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:03.442 20:15:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:03.442 20:15:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:03.442 20:15:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:03.442 20:15:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:19:03.442 20:15:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:03.442 20:15:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:19:03.442 20:15:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:03.701 [2024-10-17 20:15:49.196919] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:19:03.701 /dev/nbd0 00:19:03.701 20:15:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:03.701 20:15:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:03.701 20:15:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:19:03.701 20:15:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:19:03.701 20:15:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:03.701 20:15:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:03.701 20:15:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:19:03.701 20:15:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:19:03.701 20:15:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:03.701 20:15:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:03.701 20:15:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:03.701 1+0 records in 00:19:03.701 1+0 records out 00:19:03.701 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000367388 s, 11.1 MB/s 00:19:03.701 20:15:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:03.701 20:15:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:19:03.701 20:15:49 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:03.701 20:15:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:03.701 20:15:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:19:03.701 20:15:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:03.701 20:15:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:03.701 20:15:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:19:03.701 20:15:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:19:03.701 20:15:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:19:03.701 20:15:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:19:04.636 512+0 records in 00:19:04.636 512+0 records out 00:19:04.636 100663296 bytes (101 MB, 96 MiB) copied, 0.743297 s, 135 MB/s 00:19:04.636 20:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:04.636 20:15:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:04.636 20:15:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:04.636 20:15:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:04.636 20:15:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:19:04.636 20:15:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:04.636 20:15:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:04.896 [2024-10-17 20:15:50.334623] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:19:04.896 20:15:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:04.896 20:15:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:04.896 20:15:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:04.896 20:15:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:04.896 20:15:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:04.896 20:15:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:04.896 20:15:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:19:04.896 20:15:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:19:04.896 20:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:04.896 20:15:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.896 20:15:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.896 [2024-10-17 20:15:50.350909] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:04.896 20:15:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.896 20:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:04.896 20:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:04.896 20:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:04.896 20:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:04.896 20:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:04.896 20:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:19:04.896 20:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:04.896 20:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:04.896 20:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:04.896 20:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:04.896 20:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.896 20:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.896 20:15:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.896 20:15:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.896 20:15:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.896 20:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:04.896 "name": "raid_bdev1", 00:19:04.896 "uuid": "5154d161-ea9b-4b7c-9d3d-ff3f2be200f2", 00:19:04.896 "strip_size_kb": 64, 00:19:04.896 "state": "online", 00:19:04.896 "raid_level": "raid5f", 00:19:04.896 "superblock": false, 00:19:04.896 "num_base_bdevs": 4, 00:19:04.896 "num_base_bdevs_discovered": 3, 00:19:04.896 "num_base_bdevs_operational": 3, 00:19:04.896 "base_bdevs_list": [ 00:19:04.896 { 00:19:04.896 "name": null, 00:19:04.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:04.896 "is_configured": false, 00:19:04.896 "data_offset": 0, 00:19:04.896 "data_size": 65536 00:19:04.896 }, 00:19:04.896 { 00:19:04.896 "name": "BaseBdev2", 00:19:04.896 "uuid": "c3e6e967-2f76-5555-95b0-aff04642fbb1", 00:19:04.896 "is_configured": true, 00:19:04.896 "data_offset": 0, 00:19:04.896 "data_size": 65536 00:19:04.896 }, 00:19:04.896 { 00:19:04.896 "name": "BaseBdev3", 00:19:04.896 "uuid": 
"b2e1f11c-eeaf-555f-a815-4a11202ee55f", 00:19:04.896 "is_configured": true, 00:19:04.896 "data_offset": 0, 00:19:04.896 "data_size": 65536 00:19:04.896 }, 00:19:04.896 { 00:19:04.896 "name": "BaseBdev4", 00:19:04.896 "uuid": "decb9f90-c65e-5ce5-907f-8b13a4c09855", 00:19:04.896 "is_configured": true, 00:19:04.896 "data_offset": 0, 00:19:04.896 "data_size": 65536 00:19:04.896 } 00:19:04.896 ] 00:19:04.896 }' 00:19:04.896 20:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:04.896 20:15:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.465 20:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:05.465 20:15:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.465 20:15:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.465 [2024-10-17 20:15:50.867050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:05.465 [2024-10-17 20:15:50.881251] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:19:05.465 20:15:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.465 20:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:05.465 [2024-10-17 20:15:50.889805] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:06.400 20:15:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:06.400 20:15:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:06.400 20:15:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:06.400 20:15:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:06.400 20:15:51 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:06.400 20:15:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.400 20:15:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:06.400 20:15:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.400 20:15:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.400 20:15:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.400 20:15:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:06.400 "name": "raid_bdev1", 00:19:06.400 "uuid": "5154d161-ea9b-4b7c-9d3d-ff3f2be200f2", 00:19:06.400 "strip_size_kb": 64, 00:19:06.400 "state": "online", 00:19:06.400 "raid_level": "raid5f", 00:19:06.400 "superblock": false, 00:19:06.400 "num_base_bdevs": 4, 00:19:06.400 "num_base_bdevs_discovered": 4, 00:19:06.400 "num_base_bdevs_operational": 4, 00:19:06.400 "process": { 00:19:06.400 "type": "rebuild", 00:19:06.400 "target": "spare", 00:19:06.400 "progress": { 00:19:06.400 "blocks": 17280, 00:19:06.400 "percent": 8 00:19:06.400 } 00:19:06.400 }, 00:19:06.400 "base_bdevs_list": [ 00:19:06.400 { 00:19:06.400 "name": "spare", 00:19:06.400 "uuid": "b96f7348-3de9-5f97-bec2-cfd1a443aab5", 00:19:06.400 "is_configured": true, 00:19:06.400 "data_offset": 0, 00:19:06.400 "data_size": 65536 00:19:06.400 }, 00:19:06.400 { 00:19:06.400 "name": "BaseBdev2", 00:19:06.400 "uuid": "c3e6e967-2f76-5555-95b0-aff04642fbb1", 00:19:06.400 "is_configured": true, 00:19:06.400 "data_offset": 0, 00:19:06.400 "data_size": 65536 00:19:06.400 }, 00:19:06.400 { 00:19:06.400 "name": "BaseBdev3", 00:19:06.400 "uuid": "b2e1f11c-eeaf-555f-a815-4a11202ee55f", 00:19:06.400 "is_configured": true, 00:19:06.400 "data_offset": 0, 00:19:06.400 "data_size": 65536 00:19:06.400 }, 
00:19:06.400 { 00:19:06.400 "name": "BaseBdev4", 00:19:06.400 "uuid": "decb9f90-c65e-5ce5-907f-8b13a4c09855", 00:19:06.400 "is_configured": true, 00:19:06.400 "data_offset": 0, 00:19:06.400 "data_size": 65536 00:19:06.400 } 00:19:06.400 ] 00:19:06.400 }' 00:19:06.400 20:15:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:06.400 20:15:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:06.400 20:15:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:06.400 20:15:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:06.400 20:15:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:06.400 20:15:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.400 20:15:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.658 [2024-10-17 20:15:52.051216] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:06.658 [2024-10-17 20:15:52.102178] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:06.658 [2024-10-17 20:15:52.102722] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:06.658 [2024-10-17 20:15:52.102772] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:06.658 [2024-10-17 20:15:52.102789] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:06.658 20:15:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.658 20:15:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:06.658 20:15:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:19:06.658 20:15:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:06.658 20:15:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:06.658 20:15:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:06.658 20:15:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:06.658 20:15:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:06.658 20:15:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:06.659 20:15:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:06.659 20:15:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:06.659 20:15:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.659 20:15:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:06.659 20:15:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.659 20:15:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.659 20:15:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.659 20:15:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:06.659 "name": "raid_bdev1", 00:19:06.659 "uuid": "5154d161-ea9b-4b7c-9d3d-ff3f2be200f2", 00:19:06.659 "strip_size_kb": 64, 00:19:06.659 "state": "online", 00:19:06.659 "raid_level": "raid5f", 00:19:06.659 "superblock": false, 00:19:06.659 "num_base_bdevs": 4, 00:19:06.659 "num_base_bdevs_discovered": 3, 00:19:06.659 "num_base_bdevs_operational": 3, 00:19:06.659 "base_bdevs_list": [ 00:19:06.659 { 00:19:06.659 "name": null, 00:19:06.659 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:19:06.659 "is_configured": false, 00:19:06.659 "data_offset": 0, 00:19:06.659 "data_size": 65536 00:19:06.659 }, 00:19:06.659 { 00:19:06.659 "name": "BaseBdev2", 00:19:06.659 "uuid": "c3e6e967-2f76-5555-95b0-aff04642fbb1", 00:19:06.659 "is_configured": true, 00:19:06.659 "data_offset": 0, 00:19:06.659 "data_size": 65536 00:19:06.659 }, 00:19:06.659 { 00:19:06.659 "name": "BaseBdev3", 00:19:06.659 "uuid": "b2e1f11c-eeaf-555f-a815-4a11202ee55f", 00:19:06.659 "is_configured": true, 00:19:06.659 "data_offset": 0, 00:19:06.659 "data_size": 65536 00:19:06.659 }, 00:19:06.659 { 00:19:06.659 "name": "BaseBdev4", 00:19:06.659 "uuid": "decb9f90-c65e-5ce5-907f-8b13a4c09855", 00:19:06.659 "is_configured": true, 00:19:06.659 "data_offset": 0, 00:19:06.659 "data_size": 65536 00:19:06.659 } 00:19:06.659 ] 00:19:06.659 }' 00:19:06.659 20:15:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:06.659 20:15:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.225 20:15:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:07.225 20:15:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:07.225 20:15:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:07.225 20:15:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:07.225 20:15:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:07.225 20:15:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.225 20:15:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:07.225 20:15:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.225 20:15:52 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.225 20:15:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.225 20:15:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:07.225 "name": "raid_bdev1", 00:19:07.225 "uuid": "5154d161-ea9b-4b7c-9d3d-ff3f2be200f2", 00:19:07.225 "strip_size_kb": 64, 00:19:07.225 "state": "online", 00:19:07.225 "raid_level": "raid5f", 00:19:07.225 "superblock": false, 00:19:07.225 "num_base_bdevs": 4, 00:19:07.225 "num_base_bdevs_discovered": 3, 00:19:07.225 "num_base_bdevs_operational": 3, 00:19:07.225 "base_bdevs_list": [ 00:19:07.225 { 00:19:07.225 "name": null, 00:19:07.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.225 "is_configured": false, 00:19:07.225 "data_offset": 0, 00:19:07.225 "data_size": 65536 00:19:07.225 }, 00:19:07.225 { 00:19:07.225 "name": "BaseBdev2", 00:19:07.225 "uuid": "c3e6e967-2f76-5555-95b0-aff04642fbb1", 00:19:07.225 "is_configured": true, 00:19:07.225 "data_offset": 0, 00:19:07.225 "data_size": 65536 00:19:07.225 }, 00:19:07.225 { 00:19:07.225 "name": "BaseBdev3", 00:19:07.225 "uuid": "b2e1f11c-eeaf-555f-a815-4a11202ee55f", 00:19:07.225 "is_configured": true, 00:19:07.225 "data_offset": 0, 00:19:07.225 "data_size": 65536 00:19:07.225 }, 00:19:07.225 { 00:19:07.225 "name": "BaseBdev4", 00:19:07.225 "uuid": "decb9f90-c65e-5ce5-907f-8b13a4c09855", 00:19:07.225 "is_configured": true, 00:19:07.225 "data_offset": 0, 00:19:07.225 "data_size": 65536 00:19:07.225 } 00:19:07.225 ] 00:19:07.225 }' 00:19:07.225 20:15:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:07.225 20:15:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:07.225 20:15:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:07.225 20:15:52 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:07.225 20:15:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:07.225 20:15:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.225 20:15:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.225 [2024-10-17 20:15:52.834271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:07.225 [2024-10-17 20:15:52.849689] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:19:07.225 20:15:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.225 20:15:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:07.225 [2024-10-17 20:15:52.860388] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:08.600 20:15:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:08.601 20:15:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:08.601 20:15:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:08.601 20:15:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:08.601 20:15:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:08.601 20:15:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:08.601 20:15:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.601 20:15:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.601 20:15:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:08.601 20:15:53 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.601 20:15:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:08.601 "name": "raid_bdev1", 00:19:08.601 "uuid": "5154d161-ea9b-4b7c-9d3d-ff3f2be200f2", 00:19:08.601 "strip_size_kb": 64, 00:19:08.601 "state": "online", 00:19:08.601 "raid_level": "raid5f", 00:19:08.601 "superblock": false, 00:19:08.601 "num_base_bdevs": 4, 00:19:08.601 "num_base_bdevs_discovered": 4, 00:19:08.601 "num_base_bdevs_operational": 4, 00:19:08.601 "process": { 00:19:08.601 "type": "rebuild", 00:19:08.601 "target": "spare", 00:19:08.601 "progress": { 00:19:08.601 "blocks": 17280, 00:19:08.601 "percent": 8 00:19:08.601 } 00:19:08.601 }, 00:19:08.601 "base_bdevs_list": [ 00:19:08.601 { 00:19:08.601 "name": "spare", 00:19:08.601 "uuid": "b96f7348-3de9-5f97-bec2-cfd1a443aab5", 00:19:08.601 "is_configured": true, 00:19:08.601 "data_offset": 0, 00:19:08.601 "data_size": 65536 00:19:08.601 }, 00:19:08.601 { 00:19:08.601 "name": "BaseBdev2", 00:19:08.601 "uuid": "c3e6e967-2f76-5555-95b0-aff04642fbb1", 00:19:08.601 "is_configured": true, 00:19:08.601 "data_offset": 0, 00:19:08.601 "data_size": 65536 00:19:08.601 }, 00:19:08.601 { 00:19:08.601 "name": "BaseBdev3", 00:19:08.601 "uuid": "b2e1f11c-eeaf-555f-a815-4a11202ee55f", 00:19:08.601 "is_configured": true, 00:19:08.601 "data_offset": 0, 00:19:08.601 "data_size": 65536 00:19:08.601 }, 00:19:08.601 { 00:19:08.601 "name": "BaseBdev4", 00:19:08.601 "uuid": "decb9f90-c65e-5ce5-907f-8b13a4c09855", 00:19:08.601 "is_configured": true, 00:19:08.601 "data_offset": 0, 00:19:08.601 "data_size": 65536 00:19:08.601 } 00:19:08.601 ] 00:19:08.601 }' 00:19:08.601 20:15:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:08.601 20:15:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:08.601 20:15:53 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:08.601 20:15:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:08.601 20:15:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:19:08.601 20:15:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:19:08.601 20:15:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:19:08.601 20:15:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=669 00:19:08.601 20:15:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:08.601 20:15:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:08.601 20:15:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:08.601 20:15:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:08.601 20:15:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:08.601 20:15:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:08.601 20:15:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:08.601 20:15:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.601 20:15:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:08.601 20:15:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.601 20:15:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.601 20:15:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:08.601 "name": "raid_bdev1", 00:19:08.601 "uuid": "5154d161-ea9b-4b7c-9d3d-ff3f2be200f2", 
00:19:08.601 "strip_size_kb": 64, 00:19:08.601 "state": "online", 00:19:08.601 "raid_level": "raid5f", 00:19:08.601 "superblock": false, 00:19:08.601 "num_base_bdevs": 4, 00:19:08.601 "num_base_bdevs_discovered": 4, 00:19:08.601 "num_base_bdevs_operational": 4, 00:19:08.601 "process": { 00:19:08.601 "type": "rebuild", 00:19:08.601 "target": "spare", 00:19:08.601 "progress": { 00:19:08.601 "blocks": 21120, 00:19:08.601 "percent": 10 00:19:08.601 } 00:19:08.601 }, 00:19:08.601 "base_bdevs_list": [ 00:19:08.601 { 00:19:08.601 "name": "spare", 00:19:08.601 "uuid": "b96f7348-3de9-5f97-bec2-cfd1a443aab5", 00:19:08.601 "is_configured": true, 00:19:08.601 "data_offset": 0, 00:19:08.601 "data_size": 65536 00:19:08.601 }, 00:19:08.601 { 00:19:08.601 "name": "BaseBdev2", 00:19:08.601 "uuid": "c3e6e967-2f76-5555-95b0-aff04642fbb1", 00:19:08.601 "is_configured": true, 00:19:08.601 "data_offset": 0, 00:19:08.601 "data_size": 65536 00:19:08.601 }, 00:19:08.601 { 00:19:08.601 "name": "BaseBdev3", 00:19:08.601 "uuid": "b2e1f11c-eeaf-555f-a815-4a11202ee55f", 00:19:08.601 "is_configured": true, 00:19:08.601 "data_offset": 0, 00:19:08.601 "data_size": 65536 00:19:08.601 }, 00:19:08.601 { 00:19:08.601 "name": "BaseBdev4", 00:19:08.601 "uuid": "decb9f90-c65e-5ce5-907f-8b13a4c09855", 00:19:08.601 "is_configured": true, 00:19:08.601 "data_offset": 0, 00:19:08.601 "data_size": 65536 00:19:08.601 } 00:19:08.601 ] 00:19:08.601 }' 00:19:08.601 20:15:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:08.601 20:15:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:08.601 20:15:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:08.601 20:15:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:08.601 20:15:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:09.537 20:15:55 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:09.537 20:15:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:09.537 20:15:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:09.537 20:15:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:09.537 20:15:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:09.537 20:15:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:09.537 20:15:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.537 20:15:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.537 20:15:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:09.537 20:15:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.796 20:15:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.796 20:15:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:09.796 "name": "raid_bdev1", 00:19:09.796 "uuid": "5154d161-ea9b-4b7c-9d3d-ff3f2be200f2", 00:19:09.796 "strip_size_kb": 64, 00:19:09.796 "state": "online", 00:19:09.796 "raid_level": "raid5f", 00:19:09.796 "superblock": false, 00:19:09.796 "num_base_bdevs": 4, 00:19:09.796 "num_base_bdevs_discovered": 4, 00:19:09.796 "num_base_bdevs_operational": 4, 00:19:09.796 "process": { 00:19:09.796 "type": "rebuild", 00:19:09.796 "target": "spare", 00:19:09.796 "progress": { 00:19:09.796 "blocks": 44160, 00:19:09.796 "percent": 22 00:19:09.796 } 00:19:09.796 }, 00:19:09.796 "base_bdevs_list": [ 00:19:09.796 { 00:19:09.796 "name": "spare", 00:19:09.796 "uuid": "b96f7348-3de9-5f97-bec2-cfd1a443aab5", 
00:19:09.796 "is_configured": true, 00:19:09.796 "data_offset": 0, 00:19:09.796 "data_size": 65536 00:19:09.796 }, 00:19:09.796 { 00:19:09.796 "name": "BaseBdev2", 00:19:09.796 "uuid": "c3e6e967-2f76-5555-95b0-aff04642fbb1", 00:19:09.796 "is_configured": true, 00:19:09.796 "data_offset": 0, 00:19:09.796 "data_size": 65536 00:19:09.796 }, 00:19:09.796 { 00:19:09.796 "name": "BaseBdev3", 00:19:09.796 "uuid": "b2e1f11c-eeaf-555f-a815-4a11202ee55f", 00:19:09.796 "is_configured": true, 00:19:09.796 "data_offset": 0, 00:19:09.796 "data_size": 65536 00:19:09.796 }, 00:19:09.796 { 00:19:09.796 "name": "BaseBdev4", 00:19:09.796 "uuid": "decb9f90-c65e-5ce5-907f-8b13a4c09855", 00:19:09.796 "is_configured": true, 00:19:09.796 "data_offset": 0, 00:19:09.796 "data_size": 65536 00:19:09.796 } 00:19:09.796 ] 00:19:09.796 }' 00:19:09.796 20:15:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:09.796 20:15:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:09.796 20:15:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:09.796 20:15:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:09.796 20:15:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:10.732 20:15:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:10.732 20:15:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:10.732 20:15:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:10.732 20:15:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:10.732 20:15:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:10.732 20:15:56 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:10.732 20:15:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.732 20:15:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.732 20:15:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:10.732 20:15:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.732 20:15:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.991 20:15:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:10.991 "name": "raid_bdev1", 00:19:10.991 "uuid": "5154d161-ea9b-4b7c-9d3d-ff3f2be200f2", 00:19:10.991 "strip_size_kb": 64, 00:19:10.991 "state": "online", 00:19:10.991 "raid_level": "raid5f", 00:19:10.991 "superblock": false, 00:19:10.991 "num_base_bdevs": 4, 00:19:10.991 "num_base_bdevs_discovered": 4, 00:19:10.991 "num_base_bdevs_operational": 4, 00:19:10.991 "process": { 00:19:10.991 "type": "rebuild", 00:19:10.991 "target": "spare", 00:19:10.991 "progress": { 00:19:10.991 "blocks": 65280, 00:19:10.991 "percent": 33 00:19:10.991 } 00:19:10.991 }, 00:19:10.991 "base_bdevs_list": [ 00:19:10.991 { 00:19:10.991 "name": "spare", 00:19:10.991 "uuid": "b96f7348-3de9-5f97-bec2-cfd1a443aab5", 00:19:10.991 "is_configured": true, 00:19:10.991 "data_offset": 0, 00:19:10.991 "data_size": 65536 00:19:10.991 }, 00:19:10.991 { 00:19:10.991 "name": "BaseBdev2", 00:19:10.991 "uuid": "c3e6e967-2f76-5555-95b0-aff04642fbb1", 00:19:10.991 "is_configured": true, 00:19:10.991 "data_offset": 0, 00:19:10.991 "data_size": 65536 00:19:10.991 }, 00:19:10.991 { 00:19:10.991 "name": "BaseBdev3", 00:19:10.991 "uuid": "b2e1f11c-eeaf-555f-a815-4a11202ee55f", 00:19:10.991 "is_configured": true, 00:19:10.991 "data_offset": 0, 00:19:10.991 "data_size": 65536 00:19:10.991 }, 00:19:10.991 { 00:19:10.991 "name": 
"BaseBdev4", 00:19:10.991 "uuid": "decb9f90-c65e-5ce5-907f-8b13a4c09855", 00:19:10.991 "is_configured": true, 00:19:10.991 "data_offset": 0, 00:19:10.991 "data_size": 65536 00:19:10.991 } 00:19:10.991 ] 00:19:10.991 }' 00:19:10.991 20:15:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:10.991 20:15:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:10.991 20:15:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:10.991 20:15:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:10.991 20:15:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:11.926 20:15:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:11.926 20:15:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:11.926 20:15:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:11.926 20:15:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:11.926 20:15:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:11.926 20:15:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:11.926 20:15:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.926 20:15:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.926 20:15:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.926 20:15:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.926 20:15:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.926 20:15:57 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:11.926 "name": "raid_bdev1", 00:19:11.926 "uuid": "5154d161-ea9b-4b7c-9d3d-ff3f2be200f2", 00:19:11.926 "strip_size_kb": 64, 00:19:11.926 "state": "online", 00:19:11.926 "raid_level": "raid5f", 00:19:11.926 "superblock": false, 00:19:11.926 "num_base_bdevs": 4, 00:19:11.926 "num_base_bdevs_discovered": 4, 00:19:11.926 "num_base_bdevs_operational": 4, 00:19:11.926 "process": { 00:19:11.926 "type": "rebuild", 00:19:11.926 "target": "spare", 00:19:11.926 "progress": { 00:19:11.926 "blocks": 88320, 00:19:11.926 "percent": 44 00:19:11.926 } 00:19:11.926 }, 00:19:11.926 "base_bdevs_list": [ 00:19:11.926 { 00:19:11.926 "name": "spare", 00:19:11.926 "uuid": "b96f7348-3de9-5f97-bec2-cfd1a443aab5", 00:19:11.926 "is_configured": true, 00:19:11.926 "data_offset": 0, 00:19:11.926 "data_size": 65536 00:19:11.926 }, 00:19:11.926 { 00:19:11.926 "name": "BaseBdev2", 00:19:11.926 "uuid": "c3e6e967-2f76-5555-95b0-aff04642fbb1", 00:19:11.926 "is_configured": true, 00:19:11.926 "data_offset": 0, 00:19:11.926 "data_size": 65536 00:19:11.926 }, 00:19:11.926 { 00:19:11.926 "name": "BaseBdev3", 00:19:11.926 "uuid": "b2e1f11c-eeaf-555f-a815-4a11202ee55f", 00:19:11.926 "is_configured": true, 00:19:11.926 "data_offset": 0, 00:19:11.926 "data_size": 65536 00:19:11.926 }, 00:19:11.926 { 00:19:11.926 "name": "BaseBdev4", 00:19:11.926 "uuid": "decb9f90-c65e-5ce5-907f-8b13a4c09855", 00:19:11.926 "is_configured": true, 00:19:11.926 "data_offset": 0, 00:19:11.926 "data_size": 65536 00:19:11.926 } 00:19:11.926 ] 00:19:11.926 }' 00:19:11.926 20:15:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:12.185 20:15:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:12.185 20:15:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:12.185 20:15:57 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:12.185 20:15:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:13.122 20:15:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:13.122 20:15:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:13.122 20:15:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:13.122 20:15:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:13.122 20:15:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:13.122 20:15:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:13.122 20:15:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:13.122 20:15:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.122 20:15:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.122 20:15:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.122 20:15:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.122 20:15:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:13.122 "name": "raid_bdev1", 00:19:13.122 "uuid": "5154d161-ea9b-4b7c-9d3d-ff3f2be200f2", 00:19:13.122 "strip_size_kb": 64, 00:19:13.122 "state": "online", 00:19:13.122 "raid_level": "raid5f", 00:19:13.122 "superblock": false, 00:19:13.122 "num_base_bdevs": 4, 00:19:13.122 "num_base_bdevs_discovered": 4, 00:19:13.122 "num_base_bdevs_operational": 4, 00:19:13.122 "process": { 00:19:13.122 "type": "rebuild", 00:19:13.122 "target": "spare", 00:19:13.122 "progress": { 00:19:13.122 "blocks": 109440, 00:19:13.122 "percent": 55 00:19:13.122 } 
00:19:13.122 }, 00:19:13.122 "base_bdevs_list": [ 00:19:13.122 { 00:19:13.122 "name": "spare", 00:19:13.122 "uuid": "b96f7348-3de9-5f97-bec2-cfd1a443aab5", 00:19:13.122 "is_configured": true, 00:19:13.122 "data_offset": 0, 00:19:13.122 "data_size": 65536 00:19:13.122 }, 00:19:13.122 { 00:19:13.122 "name": "BaseBdev2", 00:19:13.122 "uuid": "c3e6e967-2f76-5555-95b0-aff04642fbb1", 00:19:13.122 "is_configured": true, 00:19:13.122 "data_offset": 0, 00:19:13.122 "data_size": 65536 00:19:13.122 }, 00:19:13.122 { 00:19:13.122 "name": "BaseBdev3", 00:19:13.122 "uuid": "b2e1f11c-eeaf-555f-a815-4a11202ee55f", 00:19:13.122 "is_configured": true, 00:19:13.122 "data_offset": 0, 00:19:13.122 "data_size": 65536 00:19:13.122 }, 00:19:13.122 { 00:19:13.122 "name": "BaseBdev4", 00:19:13.122 "uuid": "decb9f90-c65e-5ce5-907f-8b13a4c09855", 00:19:13.122 "is_configured": true, 00:19:13.122 "data_offset": 0, 00:19:13.122 "data_size": 65536 00:19:13.122 } 00:19:13.122 ] 00:19:13.122 }' 00:19:13.122 20:15:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:13.380 20:15:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:13.381 20:15:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:13.381 20:15:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:13.381 20:15:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:14.317 20:15:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:14.317 20:15:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:14.317 20:15:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:14.317 20:15:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:14.317 
20:15:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:14.317 20:15:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:14.317 20:15:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:14.317 20:15:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.317 20:15:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.317 20:15:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.317 20:15:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.317 20:15:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:14.317 "name": "raid_bdev1", 00:19:14.317 "uuid": "5154d161-ea9b-4b7c-9d3d-ff3f2be200f2", 00:19:14.317 "strip_size_kb": 64, 00:19:14.317 "state": "online", 00:19:14.317 "raid_level": "raid5f", 00:19:14.317 "superblock": false, 00:19:14.317 "num_base_bdevs": 4, 00:19:14.317 "num_base_bdevs_discovered": 4, 00:19:14.317 "num_base_bdevs_operational": 4, 00:19:14.317 "process": { 00:19:14.317 "type": "rebuild", 00:19:14.317 "target": "spare", 00:19:14.317 "progress": { 00:19:14.317 "blocks": 132480, 00:19:14.317 "percent": 67 00:19:14.317 } 00:19:14.317 }, 00:19:14.317 "base_bdevs_list": [ 00:19:14.317 { 00:19:14.317 "name": "spare", 00:19:14.317 "uuid": "b96f7348-3de9-5f97-bec2-cfd1a443aab5", 00:19:14.317 "is_configured": true, 00:19:14.317 "data_offset": 0, 00:19:14.317 "data_size": 65536 00:19:14.317 }, 00:19:14.317 { 00:19:14.317 "name": "BaseBdev2", 00:19:14.317 "uuid": "c3e6e967-2f76-5555-95b0-aff04642fbb1", 00:19:14.317 "is_configured": true, 00:19:14.317 "data_offset": 0, 00:19:14.317 "data_size": 65536 00:19:14.317 }, 00:19:14.317 { 00:19:14.317 "name": "BaseBdev3", 00:19:14.317 "uuid": "b2e1f11c-eeaf-555f-a815-4a11202ee55f", 
00:19:14.317 "is_configured": true, 00:19:14.317 "data_offset": 0, 00:19:14.317 "data_size": 65536 00:19:14.317 }, 00:19:14.317 { 00:19:14.317 "name": "BaseBdev4", 00:19:14.317 "uuid": "decb9f90-c65e-5ce5-907f-8b13a4c09855", 00:19:14.317 "is_configured": true, 00:19:14.317 "data_offset": 0, 00:19:14.317 "data_size": 65536 00:19:14.317 } 00:19:14.317 ] 00:19:14.317 }' 00:19:14.317 20:15:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:14.317 20:15:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:14.317 20:15:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:14.576 20:15:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:14.576 20:15:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:15.512 20:16:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:15.512 20:16:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:15.512 20:16:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:15.512 20:16:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:15.512 20:16:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:15.512 20:16:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:15.512 20:16:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.512 20:16:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.512 20:16:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.512 20:16:01 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:19:15.512 20:16:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.512 20:16:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:15.512 "name": "raid_bdev1", 00:19:15.512 "uuid": "5154d161-ea9b-4b7c-9d3d-ff3f2be200f2", 00:19:15.512 "strip_size_kb": 64, 00:19:15.512 "state": "online", 00:19:15.512 "raid_level": "raid5f", 00:19:15.512 "superblock": false, 00:19:15.513 "num_base_bdevs": 4, 00:19:15.513 "num_base_bdevs_discovered": 4, 00:19:15.513 "num_base_bdevs_operational": 4, 00:19:15.513 "process": { 00:19:15.513 "type": "rebuild", 00:19:15.513 "target": "spare", 00:19:15.513 "progress": { 00:19:15.513 "blocks": 153600, 00:19:15.513 "percent": 78 00:19:15.513 } 00:19:15.513 }, 00:19:15.513 "base_bdevs_list": [ 00:19:15.513 { 00:19:15.513 "name": "spare", 00:19:15.513 "uuid": "b96f7348-3de9-5f97-bec2-cfd1a443aab5", 00:19:15.513 "is_configured": true, 00:19:15.513 "data_offset": 0, 00:19:15.513 "data_size": 65536 00:19:15.513 }, 00:19:15.513 { 00:19:15.513 "name": "BaseBdev2", 00:19:15.513 "uuid": "c3e6e967-2f76-5555-95b0-aff04642fbb1", 00:19:15.513 "is_configured": true, 00:19:15.513 "data_offset": 0, 00:19:15.513 "data_size": 65536 00:19:15.513 }, 00:19:15.513 { 00:19:15.513 "name": "BaseBdev3", 00:19:15.513 "uuid": "b2e1f11c-eeaf-555f-a815-4a11202ee55f", 00:19:15.513 "is_configured": true, 00:19:15.513 "data_offset": 0, 00:19:15.513 "data_size": 65536 00:19:15.513 }, 00:19:15.513 { 00:19:15.513 "name": "BaseBdev4", 00:19:15.513 "uuid": "decb9f90-c65e-5ce5-907f-8b13a4c09855", 00:19:15.513 "is_configured": true, 00:19:15.513 "data_offset": 0, 00:19:15.513 "data_size": 65536 00:19:15.513 } 00:19:15.513 ] 00:19:15.513 }' 00:19:15.513 20:16:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:15.513 20:16:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:19:15.513 20:16:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:15.772 20:16:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:15.772 20:16:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:16.707 20:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:16.707 20:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:16.707 20:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:16.707 20:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:16.707 20:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:16.707 20:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:16.707 20:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.707 20:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:16.707 20:16:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.707 20:16:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.707 20:16:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.707 20:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:16.707 "name": "raid_bdev1", 00:19:16.707 "uuid": "5154d161-ea9b-4b7c-9d3d-ff3f2be200f2", 00:19:16.707 "strip_size_kb": 64, 00:19:16.707 "state": "online", 00:19:16.707 "raid_level": "raid5f", 00:19:16.707 "superblock": false, 00:19:16.707 "num_base_bdevs": 4, 00:19:16.707 "num_base_bdevs_discovered": 4, 00:19:16.707 "num_base_bdevs_operational": 4, 00:19:16.707 
"process": { 00:19:16.707 "type": "rebuild", 00:19:16.707 "target": "spare", 00:19:16.707 "progress": { 00:19:16.707 "blocks": 176640, 00:19:16.707 "percent": 89 00:19:16.707 } 00:19:16.707 }, 00:19:16.707 "base_bdevs_list": [ 00:19:16.707 { 00:19:16.707 "name": "spare", 00:19:16.707 "uuid": "b96f7348-3de9-5f97-bec2-cfd1a443aab5", 00:19:16.707 "is_configured": true, 00:19:16.707 "data_offset": 0, 00:19:16.707 "data_size": 65536 00:19:16.707 }, 00:19:16.707 { 00:19:16.707 "name": "BaseBdev2", 00:19:16.707 "uuid": "c3e6e967-2f76-5555-95b0-aff04642fbb1", 00:19:16.707 "is_configured": true, 00:19:16.707 "data_offset": 0, 00:19:16.707 "data_size": 65536 00:19:16.707 }, 00:19:16.707 { 00:19:16.707 "name": "BaseBdev3", 00:19:16.707 "uuid": "b2e1f11c-eeaf-555f-a815-4a11202ee55f", 00:19:16.707 "is_configured": true, 00:19:16.707 "data_offset": 0, 00:19:16.707 "data_size": 65536 00:19:16.707 }, 00:19:16.707 { 00:19:16.707 "name": "BaseBdev4", 00:19:16.707 "uuid": "decb9f90-c65e-5ce5-907f-8b13a4c09855", 00:19:16.707 "is_configured": true, 00:19:16.707 "data_offset": 0, 00:19:16.707 "data_size": 65536 00:19:16.707 } 00:19:16.707 ] 00:19:16.707 }' 00:19:16.707 20:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:16.707 20:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:16.707 20:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:16.707 20:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:16.707 20:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:17.644 [2024-10-17 20:16:03.261720] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:17.644 [2024-10-17 20:16:03.261848] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:17.644 [2024-10-17 
20:16:03.261927] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:17.903 20:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:17.903 20:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:17.903 20:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:17.904 20:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:17.904 20:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:17.904 20:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:17.904 20:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.904 20:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.904 20:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.904 20:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.904 20:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.904 20:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:17.904 "name": "raid_bdev1", 00:19:17.904 "uuid": "5154d161-ea9b-4b7c-9d3d-ff3f2be200f2", 00:19:17.904 "strip_size_kb": 64, 00:19:17.904 "state": "online", 00:19:17.904 "raid_level": "raid5f", 00:19:17.904 "superblock": false, 00:19:17.904 "num_base_bdevs": 4, 00:19:17.904 "num_base_bdevs_discovered": 4, 00:19:17.904 "num_base_bdevs_operational": 4, 00:19:17.904 "base_bdevs_list": [ 00:19:17.904 { 00:19:17.904 "name": "spare", 00:19:17.904 "uuid": "b96f7348-3de9-5f97-bec2-cfd1a443aab5", 00:19:17.904 "is_configured": true, 00:19:17.904 "data_offset": 0, 00:19:17.904 "data_size": 65536 
00:19:17.904 }, 00:19:17.904 { 00:19:17.904 "name": "BaseBdev2", 00:19:17.904 "uuid": "c3e6e967-2f76-5555-95b0-aff04642fbb1", 00:19:17.904 "is_configured": true, 00:19:17.904 "data_offset": 0, 00:19:17.904 "data_size": 65536 00:19:17.904 }, 00:19:17.904 { 00:19:17.904 "name": "BaseBdev3", 00:19:17.904 "uuid": "b2e1f11c-eeaf-555f-a815-4a11202ee55f", 00:19:17.904 "is_configured": true, 00:19:17.904 "data_offset": 0, 00:19:17.904 "data_size": 65536 00:19:17.904 }, 00:19:17.904 { 00:19:17.904 "name": "BaseBdev4", 00:19:17.904 "uuid": "decb9f90-c65e-5ce5-907f-8b13a4c09855", 00:19:17.904 "is_configured": true, 00:19:17.904 "data_offset": 0, 00:19:17.904 "data_size": 65536 00:19:17.904 } 00:19:17.904 ] 00:19:17.904 }' 00:19:17.904 20:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:17.904 20:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:17.904 20:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:17.904 20:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:17.904 20:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:19:17.904 20:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:17.904 20:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:17.904 20:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:17.904 20:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:17.904 20:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:17.904 20:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.904 20:16:03 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.904 20:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.904 20:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.904 20:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.163 20:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:18.163 "name": "raid_bdev1", 00:19:18.163 "uuid": "5154d161-ea9b-4b7c-9d3d-ff3f2be200f2", 00:19:18.163 "strip_size_kb": 64, 00:19:18.163 "state": "online", 00:19:18.163 "raid_level": "raid5f", 00:19:18.163 "superblock": false, 00:19:18.163 "num_base_bdevs": 4, 00:19:18.163 "num_base_bdevs_discovered": 4, 00:19:18.163 "num_base_bdevs_operational": 4, 00:19:18.163 "base_bdevs_list": [ 00:19:18.163 { 00:19:18.163 "name": "spare", 00:19:18.163 "uuid": "b96f7348-3de9-5f97-bec2-cfd1a443aab5", 00:19:18.163 "is_configured": true, 00:19:18.163 "data_offset": 0, 00:19:18.163 "data_size": 65536 00:19:18.163 }, 00:19:18.163 { 00:19:18.163 "name": "BaseBdev2", 00:19:18.163 "uuid": "c3e6e967-2f76-5555-95b0-aff04642fbb1", 00:19:18.163 "is_configured": true, 00:19:18.163 "data_offset": 0, 00:19:18.163 "data_size": 65536 00:19:18.163 }, 00:19:18.163 { 00:19:18.163 "name": "BaseBdev3", 00:19:18.163 "uuid": "b2e1f11c-eeaf-555f-a815-4a11202ee55f", 00:19:18.163 "is_configured": true, 00:19:18.163 "data_offset": 0, 00:19:18.163 "data_size": 65536 00:19:18.163 }, 00:19:18.163 { 00:19:18.163 "name": "BaseBdev4", 00:19:18.163 "uuid": "decb9f90-c65e-5ce5-907f-8b13a4c09855", 00:19:18.163 "is_configured": true, 00:19:18.163 "data_offset": 0, 00:19:18.163 "data_size": 65536 00:19:18.163 } 00:19:18.163 ] 00:19:18.163 }' 00:19:18.163 20:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:18.163 20:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:19:18.163 20:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:18.163 20:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:18.163 20:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:19:18.163 20:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:18.163 20:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:18.163 20:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:18.163 20:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:18.163 20:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:18.163 20:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:18.163 20:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:18.163 20:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:18.163 20:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:18.163 20:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.163 20:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.163 20:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.163 20:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.163 20:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.163 20:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:18.163 "name": "raid_bdev1", 
00:19:18.163 "uuid": "5154d161-ea9b-4b7c-9d3d-ff3f2be200f2", 00:19:18.163 "strip_size_kb": 64, 00:19:18.163 "state": "online", 00:19:18.163 "raid_level": "raid5f", 00:19:18.163 "superblock": false, 00:19:18.163 "num_base_bdevs": 4, 00:19:18.163 "num_base_bdevs_discovered": 4, 00:19:18.163 "num_base_bdevs_operational": 4, 00:19:18.163 "base_bdevs_list": [ 00:19:18.163 { 00:19:18.163 "name": "spare", 00:19:18.163 "uuid": "b96f7348-3de9-5f97-bec2-cfd1a443aab5", 00:19:18.163 "is_configured": true, 00:19:18.163 "data_offset": 0, 00:19:18.163 "data_size": 65536 00:19:18.163 }, 00:19:18.163 { 00:19:18.163 "name": "BaseBdev2", 00:19:18.163 "uuid": "c3e6e967-2f76-5555-95b0-aff04642fbb1", 00:19:18.163 "is_configured": true, 00:19:18.163 "data_offset": 0, 00:19:18.163 "data_size": 65536 00:19:18.163 }, 00:19:18.163 { 00:19:18.163 "name": "BaseBdev3", 00:19:18.163 "uuid": "b2e1f11c-eeaf-555f-a815-4a11202ee55f", 00:19:18.163 "is_configured": true, 00:19:18.163 "data_offset": 0, 00:19:18.163 "data_size": 65536 00:19:18.163 }, 00:19:18.163 { 00:19:18.163 "name": "BaseBdev4", 00:19:18.163 "uuid": "decb9f90-c65e-5ce5-907f-8b13a4c09855", 00:19:18.163 "is_configured": true, 00:19:18.163 "data_offset": 0, 00:19:18.163 "data_size": 65536 00:19:18.163 } 00:19:18.163 ] 00:19:18.163 }' 00:19:18.163 20:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:18.163 20:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.731 20:16:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:18.731 20:16:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.731 20:16:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.731 [2024-10-17 20:16:04.242587] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:18.731 [2024-10-17 20:16:04.242670] bdev_raid.c:1899:raid_bdev_deconfigure: 
*DEBUG*: raid bdev state changing from online to offline 00:19:18.731 [2024-10-17 20:16:04.242801] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:18.731 [2024-10-17 20:16:04.242945] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:18.731 [2024-10-17 20:16:04.242972] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:18.731 20:16:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.731 20:16:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.731 20:16:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.731 20:16:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:19:18.731 20:16:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.731 20:16:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.731 20:16:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:18.731 20:16:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:18.731 20:16:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:18.731 20:16:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:18.731 20:16:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:18.731 20:16:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:18.731 20:16:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:18.731 20:16:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
00:19:18.731 20:16:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:18.731 20:16:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:19:18.731 20:16:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:18.731 20:16:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:18.731 20:16:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:18.990 /dev/nbd0 00:19:18.990 20:16:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:18.990 20:16:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:18.990 20:16:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:19:18.990 20:16:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:19:18.990 20:16:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:18.990 20:16:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:18.990 20:16:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:19:18.990 20:16:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:19:18.990 20:16:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:18.990 20:16:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:18.991 20:16:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:18.991 1+0 records in 00:19:18.991 1+0 records out 00:19:18.991 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000364241 s, 11.2 MB/s 00:19:18.991 20:16:04 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:18.991 20:16:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:19:18.991 20:16:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:18.991 20:16:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:18.991 20:16:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:19:18.991 20:16:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:18.991 20:16:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:18.991 20:16:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:19.558 /dev/nbd1 00:19:19.558 20:16:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:19.558 20:16:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:19.558 20:16:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:19:19.558 20:16:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:19:19.558 20:16:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:19.558 20:16:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:19.558 20:16:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:19:19.558 20:16:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:19:19.558 20:16:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:19.558 20:16:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:19.558 20:16:04 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:19.558 1+0 records in 00:19:19.558 1+0 records out 00:19:19.558 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000302367 s, 13.5 MB/s 00:19:19.558 20:16:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:19.558 20:16:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:19:19.558 20:16:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:19.558 20:16:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:19.558 20:16:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:19:19.558 20:16:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:19.558 20:16:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:19.558 20:16:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:19:19.558 20:16:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:19.559 20:16:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:19.559 20:16:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:19.559 20:16:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:19.559 20:16:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:19:19.559 20:16:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:19.559 20:16:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:19.817 20:16:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:19.817 20:16:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:19.817 20:16:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:19.817 20:16:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:19.817 20:16:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:19.817 20:16:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:20.076 20:16:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:19:20.076 20:16:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:19:20.076 20:16:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:20.076 20:16:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:20.335 20:16:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:20.335 20:16:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:20.335 20:16:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:20.335 20:16:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:20.335 20:16:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:20.335 20:16:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:20.335 20:16:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:19:20.335 20:16:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:19:20.335 20:16:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = 
true ']' 00:19:20.335 20:16:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 84905 00:19:20.335 20:16:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 84905 ']' 00:19:20.335 20:16:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 84905 00:19:20.335 20:16:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:19:20.335 20:16:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:20.335 20:16:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84905 00:19:20.335 killing process with pid 84905 00:19:20.335 Received shutdown signal, test time was about 60.000000 seconds 00:19:20.335 00:19:20.335 Latency(us) 00:19:20.335 [2024-10-17T20:16:05.989Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:20.335 [2024-10-17T20:16:05.989Z] =================================================================================================================== 00:19:20.335 [2024-10-17T20:16:05.989Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:20.335 20:16:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:20.335 20:16:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:20.335 20:16:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84905' 00:19:20.335 20:16:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 84905 00:19:20.335 [2024-10-17 20:16:05.830951] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:20.335 20:16:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 84905 00:19:20.594 [2024-10-17 20:16:06.243816] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:21.970 ************************************ 
00:19:21.970 END TEST raid5f_rebuild_test 00:19:21.970 ************************************ 00:19:21.971 20:16:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:19:21.971 00:19:21.971 real 0m20.378s 00:19:21.971 user 0m25.269s 00:19:21.971 sys 0m2.561s 00:19:21.971 20:16:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:21.971 20:16:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.971 20:16:07 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:19:21.971 20:16:07 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:19:21.971 20:16:07 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:21.971 20:16:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:21.971 ************************************ 00:19:21.971 START TEST raid5f_rebuild_test_sb 00:19:21.971 ************************************ 00:19:21.971 20:16:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 true false true 00:19:21.971 20:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:19:21.971 20:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:19:21.971 20:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:21.971 20:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:21.971 20:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:21.971 20:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:21.971 20:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:21.971 20:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 
00:19:21.971 20:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:21.971 20:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:21.971 20:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:21.971 20:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:21.971 20:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:21.971 20:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:19:21.971 20:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:21.971 20:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:21.971 20:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:19:21.971 20:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:21.971 20:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:21.971 20:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:21.971 20:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:21.971 20:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:21.971 20:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:21.971 20:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:21.971 20:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:21.971 20:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:21.971 20:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f 
'!=' raid1 ']' 00:19:21.971 20:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:19:21.971 20:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:19:21.971 20:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:19:21.971 20:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:21.971 20:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:21.971 20:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85414 00:19:21.971 20:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85414 00:19:21.971 20:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:21.971 20:16:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 85414 ']' 00:19:21.971 20:16:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:21.971 20:16:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:21.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:21.971 20:16:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:21.971 20:16:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:21.971 20:16:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.971 [2024-10-17 20:16:07.427938] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 00:19:21.971 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:19:21.971 Zero copy mechanism will not be used. 00:19:21.971 [2024-10-17 20:16:07.428155] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85414 ] 00:19:21.971 [2024-10-17 20:16:07.606091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.229 [2024-10-17 20:16:07.733892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:22.488 [2024-10-17 20:16:07.925256] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:22.488 [2024-10-17 20:16:07.925299] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:23.056 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:23.056 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:19:23.056 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:23.056 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:23.056 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.056 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:23.056 BaseBdev1_malloc 00:19:23.056 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.056 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:23.056 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.056 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:23.056 [2024-10-17 
20:16:08.451430] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:23.056 [2024-10-17 20:16:08.451557] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:23.056 [2024-10-17 20:16:08.451591] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:23.056 [2024-10-17 20:16:08.451610] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:23.056 [2024-10-17 20:16:08.454613] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:23.056 [2024-10-17 20:16:08.454695] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:23.056 BaseBdev1 00:19:23.056 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.056 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:23.056 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:23.056 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.056 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:23.056 BaseBdev2_malloc 00:19:23.056 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.056 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:23.056 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.056 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:23.056 [2024-10-17 20:16:08.504958] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:23.057 [2024-10-17 20:16:08.505074] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:23.057 [2024-10-17 20:16:08.505102] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:23.057 [2024-10-17 20:16:08.505120] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:23.057 [2024-10-17 20:16:08.507773] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:23.057 [2024-10-17 20:16:08.507836] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:23.057 BaseBdev2 00:19:23.057 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.057 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:23.057 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:23.057 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.057 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:23.057 BaseBdev3_malloc 00:19:23.057 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.057 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:19:23.057 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.057 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:23.057 [2024-10-17 20:16:08.568124] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:19:23.057 [2024-10-17 20:16:08.568217] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:23.057 [2024-10-17 20:16:08.568250] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000008a80 00:19:23.057 [2024-10-17 20:16:08.568270] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:23.057 [2024-10-17 20:16:08.570965] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:23.057 [2024-10-17 20:16:08.571061] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:23.057 BaseBdev3 00:19:23.057 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.057 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:23.057 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:19:23.057 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.057 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:23.057 BaseBdev4_malloc 00:19:23.057 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.057 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:19:23.057 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.057 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:23.057 [2024-10-17 20:16:08.620436] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:19:23.057 [2024-10-17 20:16:08.620548] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:23.057 [2024-10-17 20:16:08.620579] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:19:23.057 [2024-10-17 20:16:08.620598] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:23.057 [2024-10-17 20:16:08.623459] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:23.057 [2024-10-17 20:16:08.623541] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:19:23.057 BaseBdev4 00:19:23.057 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.057 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:19:23.057 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.057 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:23.057 spare_malloc 00:19:23.057 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.057 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:23.057 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.057 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:23.057 spare_delay 00:19:23.057 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.057 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:23.057 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.057 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:23.057 [2024-10-17 20:16:08.675725] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:23.057 [2024-10-17 20:16:08.675799] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:23.057 [2024-10-17 20:16:08.675830] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000a880 00:19:23.057 [2024-10-17 20:16:08.675855] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:23.057 [2024-10-17 20:16:08.678728] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:23.057 [2024-10-17 20:16:08.678791] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:23.057 spare 00:19:23.057 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.057 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:19:23.057 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.057 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:23.057 [2024-10-17 20:16:08.683755] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:23.057 [2024-10-17 20:16:08.686097] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:23.057 [2024-10-17 20:16:08.686186] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:23.057 [2024-10-17 20:16:08.686304] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:23.057 [2024-10-17 20:16:08.686588] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:23.057 [2024-10-17 20:16:08.686631] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:23.057 [2024-10-17 20:16:08.686966] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:23.057 [2024-10-17 20:16:08.693527] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:23.057 [2024-10-17 20:16:08.693554] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:23.057 [2024-10-17 20:16:08.693845] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:23.057 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.057 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:19:23.057 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:23.057 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:23.057 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:23.057 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:23.057 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:23.057 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:23.057 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:23.057 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:23.057 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:23.057 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.057 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.057 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.057 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:23.316 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.316 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:23.316 "name": "raid_bdev1", 00:19:23.316 "uuid": "543ebc18-1c19-457c-9c82-ff6557b6bcf0", 00:19:23.316 "strip_size_kb": 64, 00:19:23.316 "state": "online", 00:19:23.316 "raid_level": "raid5f", 00:19:23.316 "superblock": true, 00:19:23.316 "num_base_bdevs": 4, 00:19:23.316 "num_base_bdevs_discovered": 4, 00:19:23.316 "num_base_bdevs_operational": 4, 00:19:23.316 "base_bdevs_list": [ 00:19:23.316 { 00:19:23.316 "name": "BaseBdev1", 00:19:23.316 "uuid": "3bcd1b24-61ba-5030-ac7c-bce8844c6bf8", 00:19:23.316 "is_configured": true, 00:19:23.316 "data_offset": 2048, 00:19:23.316 "data_size": 63488 00:19:23.316 }, 00:19:23.316 { 00:19:23.316 "name": "BaseBdev2", 00:19:23.316 "uuid": "a00da8f6-e005-54ce-98f1-84a71fb820fa", 00:19:23.316 "is_configured": true, 00:19:23.316 "data_offset": 2048, 00:19:23.316 "data_size": 63488 00:19:23.316 }, 00:19:23.316 { 00:19:23.316 "name": "BaseBdev3", 00:19:23.316 "uuid": "9b526862-12cb-5253-b448-8b9c1ce16037", 00:19:23.316 "is_configured": true, 00:19:23.316 "data_offset": 2048, 00:19:23.316 "data_size": 63488 00:19:23.316 }, 00:19:23.316 { 00:19:23.316 "name": "BaseBdev4", 00:19:23.316 "uuid": "cea7171e-e843-5b56-ac24-44c4ce42b51c", 00:19:23.316 "is_configured": true, 00:19:23.316 "data_offset": 2048, 00:19:23.316 "data_size": 63488 00:19:23.316 } 00:19:23.316 ] 00:19:23.316 }' 00:19:23.316 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:23.316 20:16:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:23.574 20:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:23.574 20:16:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.574 20:16:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:19:23.574 20:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:23.574 [2024-10-17 20:16:09.206012] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:23.574 20:16:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.833 20:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:19:23.833 20:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.833 20:16:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.833 20:16:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:23.833 20:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:23.833 20:16:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.833 20:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:19:23.833 20:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:23.833 20:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:23.833 20:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:23.833 20:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:19:23.833 20:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:23.833 20:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:23.833 20:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:23.833 20:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0') 00:19:23.833 20:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:23.833 20:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:19:23.833 20:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:23.833 20:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:23.833 20:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:24.093 [2024-10-17 20:16:09.637927] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:19:24.093 /dev/nbd0 00:19:24.093 20:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:24.093 20:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:24.093 20:16:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:19:24.093 20:16:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:19:24.093 20:16:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:24.093 20:16:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:24.093 20:16:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:19:24.093 20:16:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:19:24.093 20:16:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:24.093 20:16:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:24.093 20:16:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:19:24.093 1+0 records in 00:19:24.093 1+0 records out 00:19:24.093 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000308798 s, 13.3 MB/s 00:19:24.093 20:16:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:24.093 20:16:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:19:24.093 20:16:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:24.093 20:16:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:24.093 20:16:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:19:24.093 20:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:24.093 20:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:24.093 20:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:19:24.093 20:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:19:24.093 20:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:19:24.093 20:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:19:24.662 496+0 records in 00:19:24.662 496+0 records out 00:19:24.662 97517568 bytes (98 MB, 93 MiB) copied, 0.590908 s, 165 MB/s 00:19:24.662 20:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:24.662 20:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:24.662 20:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:24.662 20:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local 
nbd_list 00:19:24.662 20:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:19:24.662 20:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:24.662 20:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:25.230 20:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:25.230 [2024-10-17 20:16:10.595685] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:25.230 20:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:25.230 20:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:25.230 20:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:25.230 20:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:25.230 20:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:25.230 20:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:25.230 20:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:25.230 20:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:25.230 20:16:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.230 20:16:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:25.230 [2024-10-17 20:16:10.607522] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:25.230 20:16:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.230 20:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 
3 00:19:25.230 20:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:25.230 20:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:25.230 20:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:25.230 20:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:25.230 20:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:25.230 20:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:25.230 20:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:25.230 20:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:25.230 20:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:25.230 20:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.230 20:16:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.230 20:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:25.230 20:16:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:25.230 20:16:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.230 20:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:25.230 "name": "raid_bdev1", 00:19:25.230 "uuid": "543ebc18-1c19-457c-9c82-ff6557b6bcf0", 00:19:25.230 "strip_size_kb": 64, 00:19:25.230 "state": "online", 00:19:25.230 "raid_level": "raid5f", 00:19:25.230 "superblock": true, 00:19:25.230 "num_base_bdevs": 4, 00:19:25.230 "num_base_bdevs_discovered": 3, 00:19:25.230 
"num_base_bdevs_operational": 3, 00:19:25.230 "base_bdevs_list": [ 00:19:25.230 { 00:19:25.230 "name": null, 00:19:25.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.230 "is_configured": false, 00:19:25.230 "data_offset": 0, 00:19:25.230 "data_size": 63488 00:19:25.230 }, 00:19:25.230 { 00:19:25.230 "name": "BaseBdev2", 00:19:25.230 "uuid": "a00da8f6-e005-54ce-98f1-84a71fb820fa", 00:19:25.230 "is_configured": true, 00:19:25.230 "data_offset": 2048, 00:19:25.230 "data_size": 63488 00:19:25.230 }, 00:19:25.230 { 00:19:25.230 "name": "BaseBdev3", 00:19:25.230 "uuid": "9b526862-12cb-5253-b448-8b9c1ce16037", 00:19:25.230 "is_configured": true, 00:19:25.230 "data_offset": 2048, 00:19:25.230 "data_size": 63488 00:19:25.230 }, 00:19:25.230 { 00:19:25.230 "name": "BaseBdev4", 00:19:25.230 "uuid": "cea7171e-e843-5b56-ac24-44c4ce42b51c", 00:19:25.230 "is_configured": true, 00:19:25.230 "data_offset": 2048, 00:19:25.230 "data_size": 63488 00:19:25.230 } 00:19:25.230 ] 00:19:25.230 }' 00:19:25.230 20:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:25.230 20:16:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:25.489 20:16:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:25.489 20:16:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.489 20:16:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:25.489 [2024-10-17 20:16:11.119726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:25.489 [2024-10-17 20:16:11.133933] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:19:25.489 20:16:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.489 20:16:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:25.748 
[2024-10-17 20:16:11.143196] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:26.684 20:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:26.684 20:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:26.684 20:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:26.684 20:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:26.684 20:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:26.684 20:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:26.684 20:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:26.684 20:16:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.684 20:16:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:26.684 20:16:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.684 20:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:26.684 "name": "raid_bdev1", 00:19:26.684 "uuid": "543ebc18-1c19-457c-9c82-ff6557b6bcf0", 00:19:26.684 "strip_size_kb": 64, 00:19:26.684 "state": "online", 00:19:26.684 "raid_level": "raid5f", 00:19:26.684 "superblock": true, 00:19:26.684 "num_base_bdevs": 4, 00:19:26.684 "num_base_bdevs_discovered": 4, 00:19:26.684 "num_base_bdevs_operational": 4, 00:19:26.684 "process": { 00:19:26.684 "type": "rebuild", 00:19:26.684 "target": "spare", 00:19:26.684 "progress": { 00:19:26.684 "blocks": 17280, 00:19:26.684 "percent": 9 00:19:26.684 } 00:19:26.684 }, 00:19:26.684 "base_bdevs_list": [ 00:19:26.684 { 00:19:26.684 "name": 
"spare", 00:19:26.684 "uuid": "ad6b577d-65b3-5b84-8fcf-3098c71fd207", 00:19:26.684 "is_configured": true, 00:19:26.684 "data_offset": 2048, 00:19:26.684 "data_size": 63488 00:19:26.684 }, 00:19:26.684 { 00:19:26.684 "name": "BaseBdev2", 00:19:26.684 "uuid": "a00da8f6-e005-54ce-98f1-84a71fb820fa", 00:19:26.684 "is_configured": true, 00:19:26.684 "data_offset": 2048, 00:19:26.684 "data_size": 63488 00:19:26.684 }, 00:19:26.684 { 00:19:26.684 "name": "BaseBdev3", 00:19:26.685 "uuid": "9b526862-12cb-5253-b448-8b9c1ce16037", 00:19:26.685 "is_configured": true, 00:19:26.685 "data_offset": 2048, 00:19:26.685 "data_size": 63488 00:19:26.685 }, 00:19:26.685 { 00:19:26.685 "name": "BaseBdev4", 00:19:26.685 "uuid": "cea7171e-e843-5b56-ac24-44c4ce42b51c", 00:19:26.685 "is_configured": true, 00:19:26.685 "data_offset": 2048, 00:19:26.685 "data_size": 63488 00:19:26.685 } 00:19:26.685 ] 00:19:26.685 }' 00:19:26.685 20:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:26.685 20:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:26.685 20:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:26.685 20:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:26.685 20:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:26.685 20:16:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.685 20:16:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:26.685 [2024-10-17 20:16:12.320655] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:26.943 [2024-10-17 20:16:12.354615] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:26.943 [2024-10-17 
20:16:12.354727] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:26.943 [2024-10-17 20:16:12.354753] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:26.943 [2024-10-17 20:16:12.354772] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:26.943 20:16:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.943 20:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:26.943 20:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:26.943 20:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:26.943 20:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:26.943 20:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:26.943 20:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:26.943 20:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:26.943 20:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:26.943 20:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:26.943 20:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:26.943 20:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:26.944 20:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:26.944 20:16:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.944 20:16:12 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:19:26.944 20:16:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.944 20:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:26.944 "name": "raid_bdev1", 00:19:26.944 "uuid": "543ebc18-1c19-457c-9c82-ff6557b6bcf0", 00:19:26.944 "strip_size_kb": 64, 00:19:26.944 "state": "online", 00:19:26.944 "raid_level": "raid5f", 00:19:26.944 "superblock": true, 00:19:26.944 "num_base_bdevs": 4, 00:19:26.944 "num_base_bdevs_discovered": 3, 00:19:26.944 "num_base_bdevs_operational": 3, 00:19:26.944 "base_bdevs_list": [ 00:19:26.944 { 00:19:26.944 "name": null, 00:19:26.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:26.944 "is_configured": false, 00:19:26.944 "data_offset": 0, 00:19:26.944 "data_size": 63488 00:19:26.944 }, 00:19:26.944 { 00:19:26.944 "name": "BaseBdev2", 00:19:26.944 "uuid": "a00da8f6-e005-54ce-98f1-84a71fb820fa", 00:19:26.944 "is_configured": true, 00:19:26.944 "data_offset": 2048, 00:19:26.944 "data_size": 63488 00:19:26.944 }, 00:19:26.944 { 00:19:26.944 "name": "BaseBdev3", 00:19:26.944 "uuid": "9b526862-12cb-5253-b448-8b9c1ce16037", 00:19:26.944 "is_configured": true, 00:19:26.944 "data_offset": 2048, 00:19:26.944 "data_size": 63488 00:19:26.944 }, 00:19:26.944 { 00:19:26.944 "name": "BaseBdev4", 00:19:26.944 "uuid": "cea7171e-e843-5b56-ac24-44c4ce42b51c", 00:19:26.944 "is_configured": true, 00:19:26.944 "data_offset": 2048, 00:19:26.944 "data_size": 63488 00:19:26.944 } 00:19:26.944 ] 00:19:26.944 }' 00:19:26.944 20:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:26.944 20:16:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:27.512 20:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:27.512 20:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:19:27.512 20:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:27.512 20:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:27.512 20:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:27.512 20:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.512 20:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:27.512 20:16:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.512 20:16:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:27.512 20:16:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.512 20:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:27.512 "name": "raid_bdev1", 00:19:27.512 "uuid": "543ebc18-1c19-457c-9c82-ff6557b6bcf0", 00:19:27.512 "strip_size_kb": 64, 00:19:27.512 "state": "online", 00:19:27.512 "raid_level": "raid5f", 00:19:27.512 "superblock": true, 00:19:27.512 "num_base_bdevs": 4, 00:19:27.512 "num_base_bdevs_discovered": 3, 00:19:27.512 "num_base_bdevs_operational": 3, 00:19:27.512 "base_bdevs_list": [ 00:19:27.512 { 00:19:27.512 "name": null, 00:19:27.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.512 "is_configured": false, 00:19:27.512 "data_offset": 0, 00:19:27.512 "data_size": 63488 00:19:27.512 }, 00:19:27.512 { 00:19:27.512 "name": "BaseBdev2", 00:19:27.512 "uuid": "a00da8f6-e005-54ce-98f1-84a71fb820fa", 00:19:27.512 "is_configured": true, 00:19:27.512 "data_offset": 2048, 00:19:27.512 "data_size": 63488 00:19:27.512 }, 00:19:27.512 { 00:19:27.512 "name": "BaseBdev3", 00:19:27.512 "uuid": "9b526862-12cb-5253-b448-8b9c1ce16037", 00:19:27.512 "is_configured": true, 
00:19:27.512 "data_offset": 2048, 00:19:27.512 "data_size": 63488 00:19:27.512 }, 00:19:27.512 { 00:19:27.512 "name": "BaseBdev4", 00:19:27.512 "uuid": "cea7171e-e843-5b56-ac24-44c4ce42b51c", 00:19:27.512 "is_configured": true, 00:19:27.512 "data_offset": 2048, 00:19:27.512 "data_size": 63488 00:19:27.512 } 00:19:27.512 ] 00:19:27.512 }' 00:19:27.512 20:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:27.512 20:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:27.512 20:16:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:27.512 20:16:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:27.512 20:16:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:27.512 20:16:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.512 20:16:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:27.512 [2024-10-17 20:16:13.052061] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:27.512 [2024-10-17 20:16:13.065614] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:19:27.512 20:16:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.512 20:16:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:27.512 [2024-10-17 20:16:13.074272] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:28.470 20:16:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:28.470 20:16:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:28.470 20:16:14 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:28.470 20:16:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:28.470 20:16:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:28.470 20:16:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:28.470 20:16:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.470 20:16:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:28.470 20:16:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:28.470 20:16:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.739 20:16:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:28.739 "name": "raid_bdev1", 00:19:28.739 "uuid": "543ebc18-1c19-457c-9c82-ff6557b6bcf0", 00:19:28.739 "strip_size_kb": 64, 00:19:28.739 "state": "online", 00:19:28.739 "raid_level": "raid5f", 00:19:28.739 "superblock": true, 00:19:28.739 "num_base_bdevs": 4, 00:19:28.739 "num_base_bdevs_discovered": 4, 00:19:28.739 "num_base_bdevs_operational": 4, 00:19:28.739 "process": { 00:19:28.739 "type": "rebuild", 00:19:28.739 "target": "spare", 00:19:28.739 "progress": { 00:19:28.739 "blocks": 17280, 00:19:28.739 "percent": 9 00:19:28.739 } 00:19:28.739 }, 00:19:28.739 "base_bdevs_list": [ 00:19:28.739 { 00:19:28.739 "name": "spare", 00:19:28.739 "uuid": "ad6b577d-65b3-5b84-8fcf-3098c71fd207", 00:19:28.739 "is_configured": true, 00:19:28.739 "data_offset": 2048, 00:19:28.739 "data_size": 63488 00:19:28.739 }, 00:19:28.739 { 00:19:28.739 "name": "BaseBdev2", 00:19:28.739 "uuid": "a00da8f6-e005-54ce-98f1-84a71fb820fa", 00:19:28.739 "is_configured": true, 00:19:28.739 "data_offset": 2048, 00:19:28.739 "data_size": 63488 
00:19:28.739 }, 00:19:28.739 { 00:19:28.739 "name": "BaseBdev3", 00:19:28.739 "uuid": "9b526862-12cb-5253-b448-8b9c1ce16037", 00:19:28.739 "is_configured": true, 00:19:28.739 "data_offset": 2048, 00:19:28.739 "data_size": 63488 00:19:28.739 }, 00:19:28.739 { 00:19:28.739 "name": "BaseBdev4", 00:19:28.739 "uuid": "cea7171e-e843-5b56-ac24-44c4ce42b51c", 00:19:28.739 "is_configured": true, 00:19:28.739 "data_offset": 2048, 00:19:28.739 "data_size": 63488 00:19:28.739 } 00:19:28.739 ] 00:19:28.739 }' 00:19:28.739 20:16:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:28.739 20:16:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:28.739 20:16:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:28.739 20:16:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:28.739 20:16:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:28.739 20:16:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:28.739 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:28.739 20:16:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:19:28.739 20:16:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:19:28.739 20:16:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=689 00:19:28.739 20:16:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:28.739 20:16:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:28.739 20:16:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:28.739 20:16:14 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:28.740 20:16:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:28.740 20:16:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:28.740 20:16:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:28.740 20:16:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.740 20:16:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:28.740 20:16:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:28.740 20:16:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.740 20:16:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:28.740 "name": "raid_bdev1", 00:19:28.740 "uuid": "543ebc18-1c19-457c-9c82-ff6557b6bcf0", 00:19:28.740 "strip_size_kb": 64, 00:19:28.740 "state": "online", 00:19:28.740 "raid_level": "raid5f", 00:19:28.740 "superblock": true, 00:19:28.740 "num_base_bdevs": 4, 00:19:28.740 "num_base_bdevs_discovered": 4, 00:19:28.740 "num_base_bdevs_operational": 4, 00:19:28.740 "process": { 00:19:28.740 "type": "rebuild", 00:19:28.740 "target": "spare", 00:19:28.740 "progress": { 00:19:28.740 "blocks": 21120, 00:19:28.740 "percent": 11 00:19:28.740 } 00:19:28.740 }, 00:19:28.740 "base_bdevs_list": [ 00:19:28.740 { 00:19:28.740 "name": "spare", 00:19:28.740 "uuid": "ad6b577d-65b3-5b84-8fcf-3098c71fd207", 00:19:28.740 "is_configured": true, 00:19:28.740 "data_offset": 2048, 00:19:28.740 "data_size": 63488 00:19:28.740 }, 00:19:28.740 { 00:19:28.740 "name": "BaseBdev2", 00:19:28.740 "uuid": "a00da8f6-e005-54ce-98f1-84a71fb820fa", 00:19:28.740 "is_configured": true, 00:19:28.740 "data_offset": 2048, 00:19:28.740 "data_size": 63488 
00:19:28.740 }, 00:19:28.740 { 00:19:28.740 "name": "BaseBdev3", 00:19:28.740 "uuid": "9b526862-12cb-5253-b448-8b9c1ce16037", 00:19:28.740 "is_configured": true, 00:19:28.740 "data_offset": 2048, 00:19:28.740 "data_size": 63488 00:19:28.740 }, 00:19:28.740 { 00:19:28.740 "name": "BaseBdev4", 00:19:28.740 "uuid": "cea7171e-e843-5b56-ac24-44c4ce42b51c", 00:19:28.740 "is_configured": true, 00:19:28.740 "data_offset": 2048, 00:19:28.740 "data_size": 63488 00:19:28.740 } 00:19:28.740 ] 00:19:28.740 }' 00:19:28.740 20:16:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:28.740 20:16:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:28.740 20:16:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:28.740 20:16:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:28.740 20:16:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:30.118 20:16:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:30.118 20:16:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:30.118 20:16:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:30.118 20:16:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:30.118 20:16:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:30.118 20:16:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:30.118 20:16:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.118 20:16:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:19:30.118 20:16:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.118 20:16:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:30.118 20:16:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.118 20:16:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:30.118 "name": "raid_bdev1", 00:19:30.118 "uuid": "543ebc18-1c19-457c-9c82-ff6557b6bcf0", 00:19:30.118 "strip_size_kb": 64, 00:19:30.118 "state": "online", 00:19:30.118 "raid_level": "raid5f", 00:19:30.118 "superblock": true, 00:19:30.118 "num_base_bdevs": 4, 00:19:30.118 "num_base_bdevs_discovered": 4, 00:19:30.118 "num_base_bdevs_operational": 4, 00:19:30.118 "process": { 00:19:30.118 "type": "rebuild", 00:19:30.118 "target": "spare", 00:19:30.118 "progress": { 00:19:30.118 "blocks": 42240, 00:19:30.118 "percent": 22 00:19:30.118 } 00:19:30.118 }, 00:19:30.118 "base_bdevs_list": [ 00:19:30.118 { 00:19:30.118 "name": "spare", 00:19:30.118 "uuid": "ad6b577d-65b3-5b84-8fcf-3098c71fd207", 00:19:30.118 "is_configured": true, 00:19:30.118 "data_offset": 2048, 00:19:30.118 "data_size": 63488 00:19:30.118 }, 00:19:30.118 { 00:19:30.118 "name": "BaseBdev2", 00:19:30.119 "uuid": "a00da8f6-e005-54ce-98f1-84a71fb820fa", 00:19:30.119 "is_configured": true, 00:19:30.119 "data_offset": 2048, 00:19:30.119 "data_size": 63488 00:19:30.119 }, 00:19:30.119 { 00:19:30.119 "name": "BaseBdev3", 00:19:30.119 "uuid": "9b526862-12cb-5253-b448-8b9c1ce16037", 00:19:30.119 "is_configured": true, 00:19:30.119 "data_offset": 2048, 00:19:30.119 "data_size": 63488 00:19:30.119 }, 00:19:30.119 { 00:19:30.119 "name": "BaseBdev4", 00:19:30.119 "uuid": "cea7171e-e843-5b56-ac24-44c4ce42b51c", 00:19:30.119 "is_configured": true, 00:19:30.119 "data_offset": 2048, 00:19:30.119 "data_size": 63488 00:19:30.119 } 00:19:30.119 ] 00:19:30.119 }' 00:19:30.119 20:16:15 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:30.119 20:16:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:30.119 20:16:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:30.119 20:16:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:30.119 20:16:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:31.055 20:16:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:31.055 20:16:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:31.055 20:16:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:31.055 20:16:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:31.055 20:16:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:31.055 20:16:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:31.055 20:16:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.055 20:16:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:31.055 20:16:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.055 20:16:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.056 20:16:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.056 20:16:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:31.056 "name": "raid_bdev1", 00:19:31.056 "uuid": "543ebc18-1c19-457c-9c82-ff6557b6bcf0", 00:19:31.056 
"strip_size_kb": 64, 00:19:31.056 "state": "online", 00:19:31.056 "raid_level": "raid5f", 00:19:31.056 "superblock": true, 00:19:31.056 "num_base_bdevs": 4, 00:19:31.056 "num_base_bdevs_discovered": 4, 00:19:31.056 "num_base_bdevs_operational": 4, 00:19:31.056 "process": { 00:19:31.056 "type": "rebuild", 00:19:31.056 "target": "spare", 00:19:31.056 "progress": { 00:19:31.056 "blocks": 65280, 00:19:31.056 "percent": 34 00:19:31.056 } 00:19:31.056 }, 00:19:31.056 "base_bdevs_list": [ 00:19:31.056 { 00:19:31.056 "name": "spare", 00:19:31.056 "uuid": "ad6b577d-65b3-5b84-8fcf-3098c71fd207", 00:19:31.056 "is_configured": true, 00:19:31.056 "data_offset": 2048, 00:19:31.056 "data_size": 63488 00:19:31.056 }, 00:19:31.056 { 00:19:31.056 "name": "BaseBdev2", 00:19:31.056 "uuid": "a00da8f6-e005-54ce-98f1-84a71fb820fa", 00:19:31.056 "is_configured": true, 00:19:31.056 "data_offset": 2048, 00:19:31.056 "data_size": 63488 00:19:31.056 }, 00:19:31.056 { 00:19:31.056 "name": "BaseBdev3", 00:19:31.056 "uuid": "9b526862-12cb-5253-b448-8b9c1ce16037", 00:19:31.056 "is_configured": true, 00:19:31.056 "data_offset": 2048, 00:19:31.056 "data_size": 63488 00:19:31.056 }, 00:19:31.056 { 00:19:31.056 "name": "BaseBdev4", 00:19:31.056 "uuid": "cea7171e-e843-5b56-ac24-44c4ce42b51c", 00:19:31.056 "is_configured": true, 00:19:31.056 "data_offset": 2048, 00:19:31.056 "data_size": 63488 00:19:31.056 } 00:19:31.056 ] 00:19:31.056 }' 00:19:31.056 20:16:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:31.056 20:16:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:31.056 20:16:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:31.056 20:16:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:31.056 20:16:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:32.435 
20:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:32.435 20:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:32.435 20:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:32.435 20:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:32.435 20:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:32.435 20:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:32.435 20:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.435 20:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:32.435 20:16:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.435 20:16:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:32.435 20:16:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.435 20:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:32.435 "name": "raid_bdev1", 00:19:32.435 "uuid": "543ebc18-1c19-457c-9c82-ff6557b6bcf0", 00:19:32.435 "strip_size_kb": 64, 00:19:32.435 "state": "online", 00:19:32.435 "raid_level": "raid5f", 00:19:32.435 "superblock": true, 00:19:32.435 "num_base_bdevs": 4, 00:19:32.435 "num_base_bdevs_discovered": 4, 00:19:32.435 "num_base_bdevs_operational": 4, 00:19:32.435 "process": { 00:19:32.435 "type": "rebuild", 00:19:32.435 "target": "spare", 00:19:32.435 "progress": { 00:19:32.435 "blocks": 86400, 00:19:32.435 "percent": 45 00:19:32.435 } 00:19:32.435 }, 00:19:32.435 "base_bdevs_list": [ 00:19:32.435 { 00:19:32.435 "name": "spare", 00:19:32.435 "uuid": 
"ad6b577d-65b3-5b84-8fcf-3098c71fd207", 00:19:32.435 "is_configured": true, 00:19:32.435 "data_offset": 2048, 00:19:32.435 "data_size": 63488 00:19:32.435 }, 00:19:32.435 { 00:19:32.435 "name": "BaseBdev2", 00:19:32.435 "uuid": "a00da8f6-e005-54ce-98f1-84a71fb820fa", 00:19:32.435 "is_configured": true, 00:19:32.435 "data_offset": 2048, 00:19:32.435 "data_size": 63488 00:19:32.435 }, 00:19:32.435 { 00:19:32.435 "name": "BaseBdev3", 00:19:32.435 "uuid": "9b526862-12cb-5253-b448-8b9c1ce16037", 00:19:32.435 "is_configured": true, 00:19:32.435 "data_offset": 2048, 00:19:32.435 "data_size": 63488 00:19:32.435 }, 00:19:32.435 { 00:19:32.435 "name": "BaseBdev4", 00:19:32.435 "uuid": "cea7171e-e843-5b56-ac24-44c4ce42b51c", 00:19:32.435 "is_configured": true, 00:19:32.435 "data_offset": 2048, 00:19:32.435 "data_size": 63488 00:19:32.435 } 00:19:32.435 ] 00:19:32.435 }' 00:19:32.435 20:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:32.435 20:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:32.435 20:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:32.435 20:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:32.435 20:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:33.394 20:16:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:33.394 20:16:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:33.394 20:16:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:33.394 20:16:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:33.394 20:16:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:19:33.394 20:16:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:33.394 20:16:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.394 20:16:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:33.394 20:16:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.394 20:16:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:33.394 20:16:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.394 20:16:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:33.394 "name": "raid_bdev1", 00:19:33.394 "uuid": "543ebc18-1c19-457c-9c82-ff6557b6bcf0", 00:19:33.394 "strip_size_kb": 64, 00:19:33.394 "state": "online", 00:19:33.394 "raid_level": "raid5f", 00:19:33.394 "superblock": true, 00:19:33.394 "num_base_bdevs": 4, 00:19:33.394 "num_base_bdevs_discovered": 4, 00:19:33.395 "num_base_bdevs_operational": 4, 00:19:33.395 "process": { 00:19:33.395 "type": "rebuild", 00:19:33.395 "target": "spare", 00:19:33.395 "progress": { 00:19:33.395 "blocks": 109440, 00:19:33.395 "percent": 57 00:19:33.395 } 00:19:33.395 }, 00:19:33.395 "base_bdevs_list": [ 00:19:33.395 { 00:19:33.395 "name": "spare", 00:19:33.395 "uuid": "ad6b577d-65b3-5b84-8fcf-3098c71fd207", 00:19:33.395 "is_configured": true, 00:19:33.395 "data_offset": 2048, 00:19:33.395 "data_size": 63488 00:19:33.395 }, 00:19:33.395 { 00:19:33.395 "name": "BaseBdev2", 00:19:33.395 "uuid": "a00da8f6-e005-54ce-98f1-84a71fb820fa", 00:19:33.395 "is_configured": true, 00:19:33.395 "data_offset": 2048, 00:19:33.395 "data_size": 63488 00:19:33.395 }, 00:19:33.395 { 00:19:33.395 "name": "BaseBdev3", 00:19:33.395 "uuid": "9b526862-12cb-5253-b448-8b9c1ce16037", 00:19:33.395 "is_configured": true, 00:19:33.395 
"data_offset": 2048, 00:19:33.395 "data_size": 63488 00:19:33.395 }, 00:19:33.395 { 00:19:33.395 "name": "BaseBdev4", 00:19:33.395 "uuid": "cea7171e-e843-5b56-ac24-44c4ce42b51c", 00:19:33.395 "is_configured": true, 00:19:33.395 "data_offset": 2048, 00:19:33.395 "data_size": 63488 00:19:33.395 } 00:19:33.395 ] 00:19:33.395 }' 00:19:33.395 20:16:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:33.395 20:16:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:33.395 20:16:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:33.395 20:16:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:33.395 20:16:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:34.771 20:16:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:34.771 20:16:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:34.772 20:16:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:34.772 20:16:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:34.772 20:16:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:34.772 20:16:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:34.772 20:16:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.772 20:16:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:34.772 20:16:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.772 20:16:20 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:19:34.772 20:16:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.772 20:16:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:34.772 "name": "raid_bdev1", 00:19:34.772 "uuid": "543ebc18-1c19-457c-9c82-ff6557b6bcf0", 00:19:34.772 "strip_size_kb": 64, 00:19:34.772 "state": "online", 00:19:34.772 "raid_level": "raid5f", 00:19:34.772 "superblock": true, 00:19:34.772 "num_base_bdevs": 4, 00:19:34.772 "num_base_bdevs_discovered": 4, 00:19:34.772 "num_base_bdevs_operational": 4, 00:19:34.772 "process": { 00:19:34.772 "type": "rebuild", 00:19:34.772 "target": "spare", 00:19:34.772 "progress": { 00:19:34.772 "blocks": 130560, 00:19:34.772 "percent": 68 00:19:34.772 } 00:19:34.772 }, 00:19:34.772 "base_bdevs_list": [ 00:19:34.772 { 00:19:34.772 "name": "spare", 00:19:34.772 "uuid": "ad6b577d-65b3-5b84-8fcf-3098c71fd207", 00:19:34.772 "is_configured": true, 00:19:34.772 "data_offset": 2048, 00:19:34.772 "data_size": 63488 00:19:34.772 }, 00:19:34.772 { 00:19:34.772 "name": "BaseBdev2", 00:19:34.772 "uuid": "a00da8f6-e005-54ce-98f1-84a71fb820fa", 00:19:34.772 "is_configured": true, 00:19:34.772 "data_offset": 2048, 00:19:34.772 "data_size": 63488 00:19:34.772 }, 00:19:34.772 { 00:19:34.772 "name": "BaseBdev3", 00:19:34.772 "uuid": "9b526862-12cb-5253-b448-8b9c1ce16037", 00:19:34.772 "is_configured": true, 00:19:34.772 "data_offset": 2048, 00:19:34.772 "data_size": 63488 00:19:34.772 }, 00:19:34.772 { 00:19:34.772 "name": "BaseBdev4", 00:19:34.772 "uuid": "cea7171e-e843-5b56-ac24-44c4ce42b51c", 00:19:34.772 "is_configured": true, 00:19:34.772 "data_offset": 2048, 00:19:34.772 "data_size": 63488 00:19:34.772 } 00:19:34.772 ] 00:19:34.772 }' 00:19:34.772 20:16:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:34.772 20:16:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild 
== \r\e\b\u\i\l\d ]] 00:19:34.772 20:16:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:34.772 20:16:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:34.772 20:16:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:35.710 20:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:35.710 20:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:35.710 20:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:35.710 20:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:35.710 20:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:35.710 20:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:35.710 20:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.710 20:16:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.710 20:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:35.710 20:16:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:35.710 20:16:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.710 20:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:35.710 "name": "raid_bdev1", 00:19:35.710 "uuid": "543ebc18-1c19-457c-9c82-ff6557b6bcf0", 00:19:35.710 "strip_size_kb": 64, 00:19:35.710 "state": "online", 00:19:35.710 "raid_level": "raid5f", 00:19:35.710 "superblock": true, 00:19:35.710 "num_base_bdevs": 4, 00:19:35.710 "num_base_bdevs_discovered": 4, 
00:19:35.710 "num_base_bdevs_operational": 4, 00:19:35.710 "process": { 00:19:35.710 "type": "rebuild", 00:19:35.710 "target": "spare", 00:19:35.710 "progress": { 00:19:35.710 "blocks": 153600, 00:19:35.710 "percent": 80 00:19:35.710 } 00:19:35.710 }, 00:19:35.710 "base_bdevs_list": [ 00:19:35.710 { 00:19:35.710 "name": "spare", 00:19:35.710 "uuid": "ad6b577d-65b3-5b84-8fcf-3098c71fd207", 00:19:35.710 "is_configured": true, 00:19:35.710 "data_offset": 2048, 00:19:35.710 "data_size": 63488 00:19:35.710 }, 00:19:35.710 { 00:19:35.710 "name": "BaseBdev2", 00:19:35.710 "uuid": "a00da8f6-e005-54ce-98f1-84a71fb820fa", 00:19:35.710 "is_configured": true, 00:19:35.710 "data_offset": 2048, 00:19:35.710 "data_size": 63488 00:19:35.710 }, 00:19:35.710 { 00:19:35.710 "name": "BaseBdev3", 00:19:35.710 "uuid": "9b526862-12cb-5253-b448-8b9c1ce16037", 00:19:35.710 "is_configured": true, 00:19:35.710 "data_offset": 2048, 00:19:35.710 "data_size": 63488 00:19:35.710 }, 00:19:35.710 { 00:19:35.710 "name": "BaseBdev4", 00:19:35.710 "uuid": "cea7171e-e843-5b56-ac24-44c4ce42b51c", 00:19:35.710 "is_configured": true, 00:19:35.710 "data_offset": 2048, 00:19:35.710 "data_size": 63488 00:19:35.710 } 00:19:35.710 ] 00:19:35.710 }' 00:19:35.710 20:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:35.710 20:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:35.710 20:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:35.710 20:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:35.710 20:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:37.087 20:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:37.087 20:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:19:37.087 20:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:37.087 20:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:37.087 20:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:37.087 20:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:37.087 20:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.087 20:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:37.087 20:16:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.087 20:16:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:37.087 20:16:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.087 20:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:37.087 "name": "raid_bdev1", 00:19:37.087 "uuid": "543ebc18-1c19-457c-9c82-ff6557b6bcf0", 00:19:37.087 "strip_size_kb": 64, 00:19:37.087 "state": "online", 00:19:37.087 "raid_level": "raid5f", 00:19:37.087 "superblock": true, 00:19:37.087 "num_base_bdevs": 4, 00:19:37.087 "num_base_bdevs_discovered": 4, 00:19:37.087 "num_base_bdevs_operational": 4, 00:19:37.087 "process": { 00:19:37.087 "type": "rebuild", 00:19:37.087 "target": "spare", 00:19:37.087 "progress": { 00:19:37.087 "blocks": 176640, 00:19:37.087 "percent": 92 00:19:37.087 } 00:19:37.087 }, 00:19:37.087 "base_bdevs_list": [ 00:19:37.087 { 00:19:37.087 "name": "spare", 00:19:37.087 "uuid": "ad6b577d-65b3-5b84-8fcf-3098c71fd207", 00:19:37.087 "is_configured": true, 00:19:37.087 "data_offset": 2048, 00:19:37.087 "data_size": 63488 00:19:37.087 }, 00:19:37.087 { 00:19:37.087 "name": "BaseBdev2", 
00:19:37.087 "uuid": "a00da8f6-e005-54ce-98f1-84a71fb820fa", 00:19:37.087 "is_configured": true, 00:19:37.087 "data_offset": 2048, 00:19:37.087 "data_size": 63488 00:19:37.087 }, 00:19:37.087 { 00:19:37.087 "name": "BaseBdev3", 00:19:37.087 "uuid": "9b526862-12cb-5253-b448-8b9c1ce16037", 00:19:37.087 "is_configured": true, 00:19:37.087 "data_offset": 2048, 00:19:37.087 "data_size": 63488 00:19:37.087 }, 00:19:37.087 { 00:19:37.087 "name": "BaseBdev4", 00:19:37.087 "uuid": "cea7171e-e843-5b56-ac24-44c4ce42b51c", 00:19:37.087 "is_configured": true, 00:19:37.087 "data_offset": 2048, 00:19:37.087 "data_size": 63488 00:19:37.087 } 00:19:37.087 ] 00:19:37.087 }' 00:19:37.087 20:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:37.087 20:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:37.087 20:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:37.087 20:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:37.087 20:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:37.684 [2024-10-17 20:16:23.172139] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:37.684 [2024-10-17 20:16:23.172282] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:37.684 [2024-10-17 20:16:23.172500] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:37.943 20:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:37.943 20:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:37.943 20:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:37.943 20:16:23 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:37.943 20:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:37.943 20:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:37.943 20:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.943 20:16:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.943 20:16:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:37.943 20:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:37.943 20:16:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.943 20:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:37.943 "name": "raid_bdev1", 00:19:37.943 "uuid": "543ebc18-1c19-457c-9c82-ff6557b6bcf0", 00:19:37.943 "strip_size_kb": 64, 00:19:37.943 "state": "online", 00:19:37.943 "raid_level": "raid5f", 00:19:37.943 "superblock": true, 00:19:37.943 "num_base_bdevs": 4, 00:19:37.943 "num_base_bdevs_discovered": 4, 00:19:37.943 "num_base_bdevs_operational": 4, 00:19:37.943 "base_bdevs_list": [ 00:19:37.943 { 00:19:37.943 "name": "spare", 00:19:37.943 "uuid": "ad6b577d-65b3-5b84-8fcf-3098c71fd207", 00:19:37.943 "is_configured": true, 00:19:37.943 "data_offset": 2048, 00:19:37.943 "data_size": 63488 00:19:37.943 }, 00:19:37.943 { 00:19:37.943 "name": "BaseBdev2", 00:19:37.943 "uuid": "a00da8f6-e005-54ce-98f1-84a71fb820fa", 00:19:37.943 "is_configured": true, 00:19:37.943 "data_offset": 2048, 00:19:37.943 "data_size": 63488 00:19:37.943 }, 00:19:37.943 { 00:19:37.943 "name": "BaseBdev3", 00:19:37.943 "uuid": "9b526862-12cb-5253-b448-8b9c1ce16037", 00:19:37.943 "is_configured": true, 00:19:37.943 "data_offset": 2048, 00:19:37.943 
"data_size": 63488 00:19:37.943 }, 00:19:37.943 { 00:19:37.943 "name": "BaseBdev4", 00:19:37.943 "uuid": "cea7171e-e843-5b56-ac24-44c4ce42b51c", 00:19:37.943 "is_configured": true, 00:19:37.943 "data_offset": 2048, 00:19:37.943 "data_size": 63488 00:19:37.943 } 00:19:37.943 ] 00:19:37.943 }' 00:19:37.943 20:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:38.202 20:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:38.202 20:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:38.202 20:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:38.202 20:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:19:38.202 20:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:38.202 20:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:38.202 20:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:38.202 20:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:38.202 20:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:38.202 20:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.202 20:16:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.202 20:16:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:38.202 20:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.202 20:16:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.202 20:16:23 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:38.202 "name": "raid_bdev1", 00:19:38.202 "uuid": "543ebc18-1c19-457c-9c82-ff6557b6bcf0", 00:19:38.202 "strip_size_kb": 64, 00:19:38.202 "state": "online", 00:19:38.202 "raid_level": "raid5f", 00:19:38.202 "superblock": true, 00:19:38.202 "num_base_bdevs": 4, 00:19:38.202 "num_base_bdevs_discovered": 4, 00:19:38.202 "num_base_bdevs_operational": 4, 00:19:38.202 "base_bdevs_list": [ 00:19:38.202 { 00:19:38.202 "name": "spare", 00:19:38.202 "uuid": "ad6b577d-65b3-5b84-8fcf-3098c71fd207", 00:19:38.202 "is_configured": true, 00:19:38.202 "data_offset": 2048, 00:19:38.202 "data_size": 63488 00:19:38.202 }, 00:19:38.202 { 00:19:38.202 "name": "BaseBdev2", 00:19:38.202 "uuid": "a00da8f6-e005-54ce-98f1-84a71fb820fa", 00:19:38.202 "is_configured": true, 00:19:38.202 "data_offset": 2048, 00:19:38.202 "data_size": 63488 00:19:38.202 }, 00:19:38.202 { 00:19:38.202 "name": "BaseBdev3", 00:19:38.202 "uuid": "9b526862-12cb-5253-b448-8b9c1ce16037", 00:19:38.202 "is_configured": true, 00:19:38.202 "data_offset": 2048, 00:19:38.202 "data_size": 63488 00:19:38.202 }, 00:19:38.202 { 00:19:38.202 "name": "BaseBdev4", 00:19:38.202 "uuid": "cea7171e-e843-5b56-ac24-44c4ce42b51c", 00:19:38.202 "is_configured": true, 00:19:38.202 "data_offset": 2048, 00:19:38.202 "data_size": 63488 00:19:38.202 } 00:19:38.202 ] 00:19:38.202 }' 00:19:38.203 20:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:38.203 20:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:38.203 20:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:38.203 20:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:38.203 20:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 
00:19:38.203 20:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:38.203 20:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:38.203 20:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:38.203 20:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:38.203 20:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:38.203 20:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:38.203 20:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:38.203 20:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:38.203 20:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:38.461 20:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.461 20:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.462 20:16:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.462 20:16:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:38.462 20:16:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.462 20:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:38.462 "name": "raid_bdev1", 00:19:38.462 "uuid": "543ebc18-1c19-457c-9c82-ff6557b6bcf0", 00:19:38.462 "strip_size_kb": 64, 00:19:38.462 "state": "online", 00:19:38.462 "raid_level": "raid5f", 00:19:38.462 "superblock": true, 00:19:38.462 "num_base_bdevs": 4, 00:19:38.462 "num_base_bdevs_discovered": 4, 00:19:38.462 
"num_base_bdevs_operational": 4, 00:19:38.462 "base_bdevs_list": [ 00:19:38.462 { 00:19:38.462 "name": "spare", 00:19:38.462 "uuid": "ad6b577d-65b3-5b84-8fcf-3098c71fd207", 00:19:38.462 "is_configured": true, 00:19:38.462 "data_offset": 2048, 00:19:38.462 "data_size": 63488 00:19:38.462 }, 00:19:38.462 { 00:19:38.462 "name": "BaseBdev2", 00:19:38.462 "uuid": "a00da8f6-e005-54ce-98f1-84a71fb820fa", 00:19:38.462 "is_configured": true, 00:19:38.462 "data_offset": 2048, 00:19:38.462 "data_size": 63488 00:19:38.462 }, 00:19:38.462 { 00:19:38.462 "name": "BaseBdev3", 00:19:38.462 "uuid": "9b526862-12cb-5253-b448-8b9c1ce16037", 00:19:38.462 "is_configured": true, 00:19:38.462 "data_offset": 2048, 00:19:38.462 "data_size": 63488 00:19:38.462 }, 00:19:38.462 { 00:19:38.462 "name": "BaseBdev4", 00:19:38.462 "uuid": "cea7171e-e843-5b56-ac24-44c4ce42b51c", 00:19:38.462 "is_configured": true, 00:19:38.462 "data_offset": 2048, 00:19:38.462 "data_size": 63488 00:19:38.462 } 00:19:38.462 ] 00:19:38.462 }' 00:19:38.462 20:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:38.462 20:16:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:38.720 20:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:38.720 20:16:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.720 20:16:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:38.720 [2024-10-17 20:16:24.358117] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:38.720 [2024-10-17 20:16:24.358404] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:38.720 [2024-10-17 20:16:24.358528] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:38.720 [2024-10-17 20:16:24.358655] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:19:38.720 [2024-10-17 20:16:24.358682] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:38.720 20:16:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.720 20:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.720 20:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:19:38.720 20:16:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.720 20:16:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:38.979 20:16:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.979 20:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:38.979 20:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:38.979 20:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:38.979 20:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:38.979 20:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:38.979 20:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:38.979 20:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:38.980 20:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:38.980 20:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:38.980 20:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:19:38.980 20:16:24 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:38.980 20:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:38.980 20:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:39.238 /dev/nbd0 00:19:39.238 20:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:39.238 20:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:39.238 20:16:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:19:39.238 20:16:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:19:39.238 20:16:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:39.238 20:16:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:39.238 20:16:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:19:39.238 20:16:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:19:39.238 20:16:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:39.238 20:16:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:39.238 20:16:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:39.238 1+0 records in 00:19:39.238 1+0 records out 00:19:39.238 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000407071 s, 10.1 MB/s 00:19:39.238 20:16:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:39.238 20:16:24 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@886 -- # size=4096 00:19:39.238 20:16:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:39.238 20:16:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:39.238 20:16:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:19:39.238 20:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:39.239 20:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:39.239 20:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:39.497 /dev/nbd1 00:19:39.497 20:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:39.497 20:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:39.497 20:16:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:19:39.497 20:16:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:19:39.497 20:16:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:39.497 20:16:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:39.497 20:16:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:19:39.497 20:16:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:19:39.497 20:16:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:39.497 20:16:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:39.497 20:16:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:39.497 1+0 records in 00:19:39.497 1+0 records out 00:19:39.497 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000433238 s, 9.5 MB/s 00:19:39.497 20:16:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:39.497 20:16:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:19:39.497 20:16:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:39.497 20:16:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:39.497 20:16:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:19:39.497 20:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:39.497 20:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:39.497 20:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:39.755 20:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:39.755 20:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:39.755 20:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:39.755 20:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:39.755 20:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:19:39.755 20:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:39.755 20:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 
00:19:40.013 20:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:40.013 20:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:40.013 20:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:40.013 20:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:40.013 20:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:40.013 20:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:40.013 20:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:40.013 20:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:40.013 20:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:40.013 20:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:40.582 20:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:40.582 20:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:40.582 20:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:40.582 20:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:40.582 20:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:40.582 20:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:40.582 20:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:40.582 20:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:40.582 20:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # 
'[' true = true ']' 00:19:40.582 20:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:40.582 20:16:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.582 20:16:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:40.582 20:16:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.582 20:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:40.582 20:16:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.582 20:16:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:40.582 [2024-10-17 20:16:25.981702] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:40.582 [2024-10-17 20:16:25.981962] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:40.582 [2024-10-17 20:16:25.982051] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:19:40.582 [2024-10-17 20:16:25.982071] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:40.582 [2024-10-17 20:16:25.985036] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:40.582 [2024-10-17 20:16:25.985091] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:40.582 [2024-10-17 20:16:25.985201] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:40.582 [2024-10-17 20:16:25.985269] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:40.582 [2024-10-17 20:16:25.985461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:40.582 [2024-10-17 20:16:25.985591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:19:40.582 [2024-10-17 20:16:25.985699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:40.582 spare 00:19:40.582 20:16:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.582 20:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:40.582 20:16:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.582 20:16:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:40.582 [2024-10-17 20:16:26.085878] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:40.582 [2024-10-17 20:16:26.086157] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:40.582 [2024-10-17 20:16:26.086575] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:19:40.582 [2024-10-17 20:16:26.093032] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:40.582 [2024-10-17 20:16:26.093064] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:40.582 [2024-10-17 20:16:26.093364] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:40.582 20:16:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.582 20:16:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:19:40.582 20:16:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:40.582 20:16:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:40.582 20:16:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:40.583 20:16:26 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:40.583 20:16:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:40.583 20:16:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:40.583 20:16:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:40.583 20:16:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:40.583 20:16:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:40.583 20:16:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.583 20:16:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:40.583 20:16:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.583 20:16:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:40.583 20:16:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.583 20:16:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:40.583 "name": "raid_bdev1", 00:19:40.583 "uuid": "543ebc18-1c19-457c-9c82-ff6557b6bcf0", 00:19:40.583 "strip_size_kb": 64, 00:19:40.583 "state": "online", 00:19:40.583 "raid_level": "raid5f", 00:19:40.583 "superblock": true, 00:19:40.583 "num_base_bdevs": 4, 00:19:40.583 "num_base_bdevs_discovered": 4, 00:19:40.583 "num_base_bdevs_operational": 4, 00:19:40.583 "base_bdevs_list": [ 00:19:40.583 { 00:19:40.583 "name": "spare", 00:19:40.583 "uuid": "ad6b577d-65b3-5b84-8fcf-3098c71fd207", 00:19:40.583 "is_configured": true, 00:19:40.583 "data_offset": 2048, 00:19:40.583 "data_size": 63488 00:19:40.583 }, 00:19:40.583 { 00:19:40.583 "name": "BaseBdev2", 00:19:40.583 "uuid": 
"a00da8f6-e005-54ce-98f1-84a71fb820fa", 00:19:40.583 "is_configured": true, 00:19:40.583 "data_offset": 2048, 00:19:40.583 "data_size": 63488 00:19:40.583 }, 00:19:40.583 { 00:19:40.583 "name": "BaseBdev3", 00:19:40.583 "uuid": "9b526862-12cb-5253-b448-8b9c1ce16037", 00:19:40.583 "is_configured": true, 00:19:40.583 "data_offset": 2048, 00:19:40.583 "data_size": 63488 00:19:40.583 }, 00:19:40.583 { 00:19:40.583 "name": "BaseBdev4", 00:19:40.583 "uuid": "cea7171e-e843-5b56-ac24-44c4ce42b51c", 00:19:40.583 "is_configured": true, 00:19:40.583 "data_offset": 2048, 00:19:40.583 "data_size": 63488 00:19:40.583 } 00:19:40.583 ] 00:19:40.583 }' 00:19:40.583 20:16:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:40.583 20:16:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:41.150 20:16:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:41.150 20:16:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:41.150 20:16:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:41.150 20:16:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:41.150 20:16:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:41.150 20:16:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:41.150 20:16:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:41.150 20:16:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.150 20:16:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:41.150 20:16:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.150 20:16:26 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:41.150 "name": "raid_bdev1", 00:19:41.150 "uuid": "543ebc18-1c19-457c-9c82-ff6557b6bcf0", 00:19:41.150 "strip_size_kb": 64, 00:19:41.150 "state": "online", 00:19:41.150 "raid_level": "raid5f", 00:19:41.150 "superblock": true, 00:19:41.150 "num_base_bdevs": 4, 00:19:41.150 "num_base_bdevs_discovered": 4, 00:19:41.150 "num_base_bdevs_operational": 4, 00:19:41.150 "base_bdevs_list": [ 00:19:41.150 { 00:19:41.150 "name": "spare", 00:19:41.150 "uuid": "ad6b577d-65b3-5b84-8fcf-3098c71fd207", 00:19:41.150 "is_configured": true, 00:19:41.150 "data_offset": 2048, 00:19:41.150 "data_size": 63488 00:19:41.150 }, 00:19:41.150 { 00:19:41.150 "name": "BaseBdev2", 00:19:41.150 "uuid": "a00da8f6-e005-54ce-98f1-84a71fb820fa", 00:19:41.150 "is_configured": true, 00:19:41.150 "data_offset": 2048, 00:19:41.150 "data_size": 63488 00:19:41.150 }, 00:19:41.150 { 00:19:41.150 "name": "BaseBdev3", 00:19:41.150 "uuid": "9b526862-12cb-5253-b448-8b9c1ce16037", 00:19:41.150 "is_configured": true, 00:19:41.150 "data_offset": 2048, 00:19:41.150 "data_size": 63488 00:19:41.150 }, 00:19:41.150 { 00:19:41.150 "name": "BaseBdev4", 00:19:41.150 "uuid": "cea7171e-e843-5b56-ac24-44c4ce42b51c", 00:19:41.150 "is_configured": true, 00:19:41.150 "data_offset": 2048, 00:19:41.150 "data_size": 63488 00:19:41.150 } 00:19:41.150 ] 00:19:41.150 }' 00:19:41.150 20:16:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:41.150 20:16:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:41.150 20:16:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:41.150 20:16:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:41.150 20:16:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:41.150 
20:16:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:41.150 20:16:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.150 20:16:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:41.409 20:16:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.409 20:16:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:41.409 20:16:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:41.409 20:16:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.409 20:16:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:41.409 [2024-10-17 20:16:26.856941] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:41.409 20:16:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.409 20:16:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:41.409 20:16:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:41.409 20:16:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:41.409 20:16:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:41.409 20:16:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:41.409 20:16:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:41.409 20:16:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:41.409 20:16:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:19:41.409 20:16:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:41.409 20:16:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:41.409 20:16:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:41.409 20:16:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:41.409 20:16:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.409 20:16:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:41.409 20:16:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.409 20:16:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:41.409 "name": "raid_bdev1", 00:19:41.409 "uuid": "543ebc18-1c19-457c-9c82-ff6557b6bcf0", 00:19:41.409 "strip_size_kb": 64, 00:19:41.409 "state": "online", 00:19:41.409 "raid_level": "raid5f", 00:19:41.409 "superblock": true, 00:19:41.409 "num_base_bdevs": 4, 00:19:41.409 "num_base_bdevs_discovered": 3, 00:19:41.409 "num_base_bdevs_operational": 3, 00:19:41.409 "base_bdevs_list": [ 00:19:41.409 { 00:19:41.409 "name": null, 00:19:41.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:41.409 "is_configured": false, 00:19:41.409 "data_offset": 0, 00:19:41.409 "data_size": 63488 00:19:41.409 }, 00:19:41.409 { 00:19:41.409 "name": "BaseBdev2", 00:19:41.409 "uuid": "a00da8f6-e005-54ce-98f1-84a71fb820fa", 00:19:41.409 "is_configured": true, 00:19:41.409 "data_offset": 2048, 00:19:41.409 "data_size": 63488 00:19:41.409 }, 00:19:41.410 { 00:19:41.410 "name": "BaseBdev3", 00:19:41.410 "uuid": "9b526862-12cb-5253-b448-8b9c1ce16037", 00:19:41.410 "is_configured": true, 00:19:41.410 "data_offset": 2048, 00:19:41.410 "data_size": 63488 00:19:41.410 }, 00:19:41.410 { 00:19:41.410 "name": "BaseBdev4", 
00:19:41.410 "uuid": "cea7171e-e843-5b56-ac24-44c4ce42b51c", 00:19:41.410 "is_configured": true, 00:19:41.410 "data_offset": 2048, 00:19:41.410 "data_size": 63488 00:19:41.410 } 00:19:41.410 ] 00:19:41.410 }' 00:19:41.410 20:16:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:41.410 20:16:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:41.990 20:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:41.990 20:16:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.990 20:16:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:41.990 [2024-10-17 20:16:27.385188] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:41.990 [2024-10-17 20:16:27.385475] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:41.990 [2024-10-17 20:16:27.385501] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:41.990 [2024-10-17 20:16:27.385546] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:41.990 [2024-10-17 20:16:27.398148] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:19:41.990 20:16:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.990 20:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:41.990 [2024-10-17 20:16:27.406079] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:42.934 20:16:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:42.934 20:16:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:42.934 20:16:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:42.934 20:16:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:42.934 20:16:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:42.934 20:16:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:42.934 20:16:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.934 20:16:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:42.934 20:16:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:42.934 20:16:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.934 20:16:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:42.934 "name": "raid_bdev1", 00:19:42.934 "uuid": "543ebc18-1c19-457c-9c82-ff6557b6bcf0", 00:19:42.934 "strip_size_kb": 64, 00:19:42.934 "state": "online", 00:19:42.934 
"raid_level": "raid5f", 00:19:42.934 "superblock": true, 00:19:42.934 "num_base_bdevs": 4, 00:19:42.934 "num_base_bdevs_discovered": 4, 00:19:42.934 "num_base_bdevs_operational": 4, 00:19:42.934 "process": { 00:19:42.934 "type": "rebuild", 00:19:42.934 "target": "spare", 00:19:42.934 "progress": { 00:19:42.934 "blocks": 17280, 00:19:42.934 "percent": 9 00:19:42.934 } 00:19:42.934 }, 00:19:42.934 "base_bdevs_list": [ 00:19:42.934 { 00:19:42.934 "name": "spare", 00:19:42.934 "uuid": "ad6b577d-65b3-5b84-8fcf-3098c71fd207", 00:19:42.934 "is_configured": true, 00:19:42.934 "data_offset": 2048, 00:19:42.934 "data_size": 63488 00:19:42.934 }, 00:19:42.934 { 00:19:42.935 "name": "BaseBdev2", 00:19:42.935 "uuid": "a00da8f6-e005-54ce-98f1-84a71fb820fa", 00:19:42.935 "is_configured": true, 00:19:42.935 "data_offset": 2048, 00:19:42.935 "data_size": 63488 00:19:42.935 }, 00:19:42.935 { 00:19:42.935 "name": "BaseBdev3", 00:19:42.935 "uuid": "9b526862-12cb-5253-b448-8b9c1ce16037", 00:19:42.935 "is_configured": true, 00:19:42.935 "data_offset": 2048, 00:19:42.935 "data_size": 63488 00:19:42.935 }, 00:19:42.935 { 00:19:42.935 "name": "BaseBdev4", 00:19:42.935 "uuid": "cea7171e-e843-5b56-ac24-44c4ce42b51c", 00:19:42.935 "is_configured": true, 00:19:42.935 "data_offset": 2048, 00:19:42.935 "data_size": 63488 00:19:42.935 } 00:19:42.935 ] 00:19:42.935 }' 00:19:42.935 20:16:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:42.935 20:16:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:42.935 20:16:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:42.935 20:16:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:42.935 20:16:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:42.935 20:16:28 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.935 20:16:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:42.935 [2024-10-17 20:16:28.567206] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:43.194 [2024-10-17 20:16:28.617074] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:43.194 [2024-10-17 20:16:28.617163] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:43.194 [2024-10-17 20:16:28.617201] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:43.194 [2024-10-17 20:16:28.617220] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:43.194 20:16:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.194 20:16:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:43.194 20:16:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:43.194 20:16:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:43.194 20:16:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:43.194 20:16:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:43.194 20:16:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:43.194 20:16:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:43.194 20:16:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:43.194 20:16:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:43.194 20:16:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:19:43.194 20:16:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.194 20:16:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:43.194 20:16:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.194 20:16:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.194 20:16:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.194 20:16:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:43.194 "name": "raid_bdev1", 00:19:43.194 "uuid": "543ebc18-1c19-457c-9c82-ff6557b6bcf0", 00:19:43.194 "strip_size_kb": 64, 00:19:43.194 "state": "online", 00:19:43.194 "raid_level": "raid5f", 00:19:43.194 "superblock": true, 00:19:43.194 "num_base_bdevs": 4, 00:19:43.194 "num_base_bdevs_discovered": 3, 00:19:43.194 "num_base_bdevs_operational": 3, 00:19:43.194 "base_bdevs_list": [ 00:19:43.194 { 00:19:43.194 "name": null, 00:19:43.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:43.194 "is_configured": false, 00:19:43.194 "data_offset": 0, 00:19:43.194 "data_size": 63488 00:19:43.194 }, 00:19:43.194 { 00:19:43.194 "name": "BaseBdev2", 00:19:43.194 "uuid": "a00da8f6-e005-54ce-98f1-84a71fb820fa", 00:19:43.194 "is_configured": true, 00:19:43.194 "data_offset": 2048, 00:19:43.194 "data_size": 63488 00:19:43.194 }, 00:19:43.194 { 00:19:43.194 "name": "BaseBdev3", 00:19:43.194 "uuid": "9b526862-12cb-5253-b448-8b9c1ce16037", 00:19:43.194 "is_configured": true, 00:19:43.194 "data_offset": 2048, 00:19:43.194 "data_size": 63488 00:19:43.194 }, 00:19:43.194 { 00:19:43.194 "name": "BaseBdev4", 00:19:43.194 "uuid": "cea7171e-e843-5b56-ac24-44c4ce42b51c", 00:19:43.194 "is_configured": true, 00:19:43.194 "data_offset": 2048, 00:19:43.194 "data_size": 63488 00:19:43.194 } 00:19:43.194 ] 00:19:43.194 }' 
00:19:43.194 20:16:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:43.194 20:16:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.761 20:16:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:43.761 20:16:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.761 20:16:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.761 [2024-10-17 20:16:29.166640] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:43.761 [2024-10-17 20:16:29.166727] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:43.761 [2024-10-17 20:16:29.166766] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:19:43.761 [2024-10-17 20:16:29.166785] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:43.761 [2024-10-17 20:16:29.167430] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:43.761 [2024-10-17 20:16:29.167461] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:43.761 [2024-10-17 20:16:29.167571] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:43.761 [2024-10-17 20:16:29.167594] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:43.761 [2024-10-17 20:16:29.167606] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:43.761 [2024-10-17 20:16:29.167640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:43.761 [2024-10-17 20:16:29.179889] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:19:43.761 spare 00:19:43.761 20:16:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.761 20:16:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:43.761 [2024-10-17 20:16:29.187719] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:44.695 20:16:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:44.695 20:16:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:44.695 20:16:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:44.695 20:16:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:44.695 20:16:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:44.695 20:16:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:44.695 20:16:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.695 20:16:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:44.695 20:16:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:44.695 20:16:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.695 20:16:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:44.695 "name": "raid_bdev1", 00:19:44.695 "uuid": "543ebc18-1c19-457c-9c82-ff6557b6bcf0", 00:19:44.695 "strip_size_kb": 64, 00:19:44.695 "state": 
"online", 00:19:44.695 "raid_level": "raid5f", 00:19:44.695 "superblock": true, 00:19:44.695 "num_base_bdevs": 4, 00:19:44.695 "num_base_bdevs_discovered": 4, 00:19:44.695 "num_base_bdevs_operational": 4, 00:19:44.695 "process": { 00:19:44.695 "type": "rebuild", 00:19:44.695 "target": "spare", 00:19:44.695 "progress": { 00:19:44.695 "blocks": 17280, 00:19:44.695 "percent": 9 00:19:44.695 } 00:19:44.695 }, 00:19:44.695 "base_bdevs_list": [ 00:19:44.695 { 00:19:44.695 "name": "spare", 00:19:44.695 "uuid": "ad6b577d-65b3-5b84-8fcf-3098c71fd207", 00:19:44.695 "is_configured": true, 00:19:44.695 "data_offset": 2048, 00:19:44.695 "data_size": 63488 00:19:44.695 }, 00:19:44.695 { 00:19:44.695 "name": "BaseBdev2", 00:19:44.695 "uuid": "a00da8f6-e005-54ce-98f1-84a71fb820fa", 00:19:44.695 "is_configured": true, 00:19:44.695 "data_offset": 2048, 00:19:44.695 "data_size": 63488 00:19:44.695 }, 00:19:44.695 { 00:19:44.695 "name": "BaseBdev3", 00:19:44.695 "uuid": "9b526862-12cb-5253-b448-8b9c1ce16037", 00:19:44.695 "is_configured": true, 00:19:44.695 "data_offset": 2048, 00:19:44.695 "data_size": 63488 00:19:44.695 }, 00:19:44.695 { 00:19:44.695 "name": "BaseBdev4", 00:19:44.695 "uuid": "cea7171e-e843-5b56-ac24-44c4ce42b51c", 00:19:44.695 "is_configured": true, 00:19:44.695 "data_offset": 2048, 00:19:44.695 "data_size": 63488 00:19:44.695 } 00:19:44.695 ] 00:19:44.695 }' 00:19:44.695 20:16:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:44.695 20:16:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:44.695 20:16:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:44.952 20:16:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:44.952 20:16:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:44.952 20:16:30 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.952 20:16:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:44.952 [2024-10-17 20:16:30.353219] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:44.952 [2024-10-17 20:16:30.399784] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:44.953 [2024-10-17 20:16:30.399872] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:44.953 [2024-10-17 20:16:30.399910] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:44.953 [2024-10-17 20:16:30.399926] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:44.953 20:16:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.953 20:16:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:44.953 20:16:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:44.953 20:16:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:44.953 20:16:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:44.953 20:16:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:44.953 20:16:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:44.953 20:16:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:44.953 20:16:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:44.953 20:16:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:44.953 20:16:30 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:44.953 20:16:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:44.953 20:16:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:44.953 20:16:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.953 20:16:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:44.953 20:16:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.953 20:16:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:44.953 "name": "raid_bdev1", 00:19:44.953 "uuid": "543ebc18-1c19-457c-9c82-ff6557b6bcf0", 00:19:44.953 "strip_size_kb": 64, 00:19:44.953 "state": "online", 00:19:44.953 "raid_level": "raid5f", 00:19:44.953 "superblock": true, 00:19:44.953 "num_base_bdevs": 4, 00:19:44.953 "num_base_bdevs_discovered": 3, 00:19:44.953 "num_base_bdevs_operational": 3, 00:19:44.953 "base_bdevs_list": [ 00:19:44.953 { 00:19:44.953 "name": null, 00:19:44.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:44.953 "is_configured": false, 00:19:44.953 "data_offset": 0, 00:19:44.953 "data_size": 63488 00:19:44.953 }, 00:19:44.953 { 00:19:44.953 "name": "BaseBdev2", 00:19:44.953 "uuid": "a00da8f6-e005-54ce-98f1-84a71fb820fa", 00:19:44.953 "is_configured": true, 00:19:44.953 "data_offset": 2048, 00:19:44.953 "data_size": 63488 00:19:44.953 }, 00:19:44.953 { 00:19:44.953 "name": "BaseBdev3", 00:19:44.953 "uuid": "9b526862-12cb-5253-b448-8b9c1ce16037", 00:19:44.953 "is_configured": true, 00:19:44.953 "data_offset": 2048, 00:19:44.953 "data_size": 63488 00:19:44.953 }, 00:19:44.953 { 00:19:44.953 "name": "BaseBdev4", 00:19:44.953 "uuid": "cea7171e-e843-5b56-ac24-44c4ce42b51c", 00:19:44.953 "is_configured": true, 00:19:44.953 "data_offset": 2048, 00:19:44.953 
"data_size": 63488 00:19:44.953 } 00:19:44.953 ] 00:19:44.953 }' 00:19:44.953 20:16:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:44.953 20:16:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:45.518 20:16:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:45.518 20:16:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:45.518 20:16:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:45.518 20:16:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:45.518 20:16:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:45.518 20:16:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:45.518 20:16:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:45.518 20:16:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.518 20:16:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:45.518 20:16:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.518 20:16:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:45.518 "name": "raid_bdev1", 00:19:45.518 "uuid": "543ebc18-1c19-457c-9c82-ff6557b6bcf0", 00:19:45.518 "strip_size_kb": 64, 00:19:45.518 "state": "online", 00:19:45.518 "raid_level": "raid5f", 00:19:45.518 "superblock": true, 00:19:45.518 "num_base_bdevs": 4, 00:19:45.518 "num_base_bdevs_discovered": 3, 00:19:45.518 "num_base_bdevs_operational": 3, 00:19:45.518 "base_bdevs_list": [ 00:19:45.518 { 00:19:45.518 "name": null, 00:19:45.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:45.518 
"is_configured": false, 00:19:45.518 "data_offset": 0, 00:19:45.518 "data_size": 63488 00:19:45.518 }, 00:19:45.518 { 00:19:45.518 "name": "BaseBdev2", 00:19:45.518 "uuid": "a00da8f6-e005-54ce-98f1-84a71fb820fa", 00:19:45.519 "is_configured": true, 00:19:45.519 "data_offset": 2048, 00:19:45.519 "data_size": 63488 00:19:45.519 }, 00:19:45.519 { 00:19:45.519 "name": "BaseBdev3", 00:19:45.519 "uuid": "9b526862-12cb-5253-b448-8b9c1ce16037", 00:19:45.519 "is_configured": true, 00:19:45.519 "data_offset": 2048, 00:19:45.519 "data_size": 63488 00:19:45.519 }, 00:19:45.519 { 00:19:45.519 "name": "BaseBdev4", 00:19:45.519 "uuid": "cea7171e-e843-5b56-ac24-44c4ce42b51c", 00:19:45.519 "is_configured": true, 00:19:45.519 "data_offset": 2048, 00:19:45.519 "data_size": 63488 00:19:45.519 } 00:19:45.519 ] 00:19:45.519 }' 00:19:45.519 20:16:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:45.519 20:16:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:45.519 20:16:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:45.519 20:16:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:45.519 20:16:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:45.519 20:16:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.519 20:16:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:45.519 20:16:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.519 20:16:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:45.519 20:16:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.519 20:16:31 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:45.519 [2024-10-17 20:16:31.138312] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:45.519 [2024-10-17 20:16:31.138393] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:45.519 [2024-10-17 20:16:31.138423] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:19:45.519 [2024-10-17 20:16:31.138437] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:45.519 [2024-10-17 20:16:31.138972] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:45.519 [2024-10-17 20:16:31.139054] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:45.519 [2024-10-17 20:16:31.139167] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:45.519 [2024-10-17 20:16:31.139188] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:45.519 [2024-10-17 20:16:31.139204] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:45.519 [2024-10-17 20:16:31.139216] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:45.519 BaseBdev1 00:19:45.519 20:16:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.519 20:16:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:46.897 20:16:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:46.897 20:16:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:46.897 20:16:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:19:46.897 20:16:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:46.897 20:16:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:46.897 20:16:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:46.897 20:16:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:46.897 20:16:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:46.897 20:16:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:46.897 20:16:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:46.897 20:16:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:46.897 20:16:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:46.897 20:16:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.897 20:16:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.897 20:16:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.897 20:16:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:46.897 "name": "raid_bdev1", 00:19:46.897 "uuid": "543ebc18-1c19-457c-9c82-ff6557b6bcf0", 00:19:46.897 "strip_size_kb": 64, 00:19:46.897 "state": "online", 00:19:46.897 "raid_level": "raid5f", 00:19:46.897 "superblock": true, 00:19:46.897 "num_base_bdevs": 4, 00:19:46.897 "num_base_bdevs_discovered": 3, 00:19:46.897 "num_base_bdevs_operational": 3, 00:19:46.897 "base_bdevs_list": [ 00:19:46.897 { 00:19:46.897 "name": null, 00:19:46.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:46.897 "is_configured": false, 00:19:46.897 
"data_offset": 0, 00:19:46.897 "data_size": 63488 00:19:46.897 }, 00:19:46.897 { 00:19:46.897 "name": "BaseBdev2", 00:19:46.897 "uuid": "a00da8f6-e005-54ce-98f1-84a71fb820fa", 00:19:46.897 "is_configured": true, 00:19:46.897 "data_offset": 2048, 00:19:46.897 "data_size": 63488 00:19:46.897 }, 00:19:46.897 { 00:19:46.897 "name": "BaseBdev3", 00:19:46.897 "uuid": "9b526862-12cb-5253-b448-8b9c1ce16037", 00:19:46.897 "is_configured": true, 00:19:46.897 "data_offset": 2048, 00:19:46.897 "data_size": 63488 00:19:46.897 }, 00:19:46.897 { 00:19:46.897 "name": "BaseBdev4", 00:19:46.897 "uuid": "cea7171e-e843-5b56-ac24-44c4ce42b51c", 00:19:46.897 "is_configured": true, 00:19:46.897 "data_offset": 2048, 00:19:46.897 "data_size": 63488 00:19:46.897 } 00:19:46.897 ] 00:19:46.897 }' 00:19:46.897 20:16:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:46.897 20:16:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.156 20:16:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:47.156 20:16:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:47.156 20:16:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:47.156 20:16:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:47.156 20:16:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:47.156 20:16:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.156 20:16:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.156 20:16:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.156 20:16:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:19:47.156 20:16:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.156 20:16:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:47.156 "name": "raid_bdev1", 00:19:47.156 "uuid": "543ebc18-1c19-457c-9c82-ff6557b6bcf0", 00:19:47.156 "strip_size_kb": 64, 00:19:47.156 "state": "online", 00:19:47.156 "raid_level": "raid5f", 00:19:47.156 "superblock": true, 00:19:47.156 "num_base_bdevs": 4, 00:19:47.156 "num_base_bdevs_discovered": 3, 00:19:47.156 "num_base_bdevs_operational": 3, 00:19:47.156 "base_bdevs_list": [ 00:19:47.156 { 00:19:47.156 "name": null, 00:19:47.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:47.156 "is_configured": false, 00:19:47.156 "data_offset": 0, 00:19:47.156 "data_size": 63488 00:19:47.156 }, 00:19:47.156 { 00:19:47.156 "name": "BaseBdev2", 00:19:47.156 "uuid": "a00da8f6-e005-54ce-98f1-84a71fb820fa", 00:19:47.156 "is_configured": true, 00:19:47.156 "data_offset": 2048, 00:19:47.156 "data_size": 63488 00:19:47.156 }, 00:19:47.156 { 00:19:47.156 "name": "BaseBdev3", 00:19:47.156 "uuid": "9b526862-12cb-5253-b448-8b9c1ce16037", 00:19:47.156 "is_configured": true, 00:19:47.156 "data_offset": 2048, 00:19:47.156 "data_size": 63488 00:19:47.156 }, 00:19:47.156 { 00:19:47.156 "name": "BaseBdev4", 00:19:47.156 "uuid": "cea7171e-e843-5b56-ac24-44c4ce42b51c", 00:19:47.156 "is_configured": true, 00:19:47.156 "data_offset": 2048, 00:19:47.156 "data_size": 63488 00:19:47.156 } 00:19:47.156 ] 00:19:47.156 }' 00:19:47.156 20:16:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:47.156 20:16:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:47.156 20:16:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:47.415 20:16:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:47.415 
20:16:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:47.415 20:16:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:19:47.416 20:16:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:47.416 20:16:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:47.416 20:16:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:47.416 20:16:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:47.416 20:16:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:47.416 20:16:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:47.416 20:16:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.416 20:16:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.416 [2024-10-17 20:16:32.846763] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:47.416 [2024-10-17 20:16:32.848170] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:47.416 [2024-10-17 20:16:32.848214] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:47.416 request: 00:19:47.416 { 00:19:47.416 "base_bdev": "BaseBdev1", 00:19:47.416 "raid_bdev": "raid_bdev1", 00:19:47.416 "method": "bdev_raid_add_base_bdev", 00:19:47.416 "req_id": 1 00:19:47.416 } 00:19:47.416 Got JSON-RPC error response 00:19:47.416 response: 00:19:47.416 { 00:19:47.416 "code": -22, 00:19:47.416 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:19:47.416 } 00:19:47.416 20:16:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:47.416 20:16:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:19:47.416 20:16:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:47.416 20:16:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:47.416 20:16:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:47.416 20:16:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:48.352 20:16:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:48.352 20:16:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:48.352 20:16:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:48.352 20:16:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:48.352 20:16:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:48.352 20:16:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:48.352 20:16:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:48.352 20:16:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:48.352 20:16:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:48.352 20:16:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:48.353 20:16:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:48.353 20:16:33 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.353 20:16:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:48.353 20:16:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:48.353 20:16:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.353 20:16:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:48.353 "name": "raid_bdev1", 00:19:48.353 "uuid": "543ebc18-1c19-457c-9c82-ff6557b6bcf0", 00:19:48.353 "strip_size_kb": 64, 00:19:48.353 "state": "online", 00:19:48.353 "raid_level": "raid5f", 00:19:48.353 "superblock": true, 00:19:48.353 "num_base_bdevs": 4, 00:19:48.353 "num_base_bdevs_discovered": 3, 00:19:48.353 "num_base_bdevs_operational": 3, 00:19:48.353 "base_bdevs_list": [ 00:19:48.353 { 00:19:48.353 "name": null, 00:19:48.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:48.353 "is_configured": false, 00:19:48.353 "data_offset": 0, 00:19:48.353 "data_size": 63488 00:19:48.353 }, 00:19:48.353 { 00:19:48.353 "name": "BaseBdev2", 00:19:48.353 "uuid": "a00da8f6-e005-54ce-98f1-84a71fb820fa", 00:19:48.353 "is_configured": true, 00:19:48.353 "data_offset": 2048, 00:19:48.353 "data_size": 63488 00:19:48.353 }, 00:19:48.353 { 00:19:48.353 "name": "BaseBdev3", 00:19:48.353 "uuid": "9b526862-12cb-5253-b448-8b9c1ce16037", 00:19:48.353 "is_configured": true, 00:19:48.353 "data_offset": 2048, 00:19:48.353 "data_size": 63488 00:19:48.353 }, 00:19:48.353 { 00:19:48.353 "name": "BaseBdev4", 00:19:48.353 "uuid": "cea7171e-e843-5b56-ac24-44c4ce42b51c", 00:19:48.353 "is_configured": true, 00:19:48.353 "data_offset": 2048, 00:19:48.353 "data_size": 63488 00:19:48.353 } 00:19:48.353 ] 00:19:48.353 }' 00:19:48.353 20:16:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:48.353 20:16:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:19:48.921 20:16:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:48.921 20:16:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:48.921 20:16:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:48.921 20:16:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:48.921 20:16:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:48.921 20:16:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:48.921 20:16:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:48.921 20:16:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.921 20:16:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:48.921 20:16:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.921 20:16:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:48.921 "name": "raid_bdev1", 00:19:48.921 "uuid": "543ebc18-1c19-457c-9c82-ff6557b6bcf0", 00:19:48.921 "strip_size_kb": 64, 00:19:48.921 "state": "online", 00:19:48.921 "raid_level": "raid5f", 00:19:48.921 "superblock": true, 00:19:48.921 "num_base_bdevs": 4, 00:19:48.921 "num_base_bdevs_discovered": 3, 00:19:48.921 "num_base_bdevs_operational": 3, 00:19:48.921 "base_bdevs_list": [ 00:19:48.921 { 00:19:48.921 "name": null, 00:19:48.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:48.921 "is_configured": false, 00:19:48.921 "data_offset": 0, 00:19:48.921 "data_size": 63488 00:19:48.921 }, 00:19:48.921 { 00:19:48.921 "name": "BaseBdev2", 00:19:48.921 "uuid": "a00da8f6-e005-54ce-98f1-84a71fb820fa", 00:19:48.921 "is_configured": true, 
00:19:48.921 "data_offset": 2048, 00:19:48.921 "data_size": 63488 00:19:48.921 }, 00:19:48.921 { 00:19:48.921 "name": "BaseBdev3", 00:19:48.921 "uuid": "9b526862-12cb-5253-b448-8b9c1ce16037", 00:19:48.921 "is_configured": true, 00:19:48.921 "data_offset": 2048, 00:19:48.921 "data_size": 63488 00:19:48.921 }, 00:19:48.921 { 00:19:48.921 "name": "BaseBdev4", 00:19:48.921 "uuid": "cea7171e-e843-5b56-ac24-44c4ce42b51c", 00:19:48.921 "is_configured": true, 00:19:48.921 "data_offset": 2048, 00:19:48.921 "data_size": 63488 00:19:48.921 } 00:19:48.921 ] 00:19:48.921 }' 00:19:48.921 20:16:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:48.921 20:16:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:48.921 20:16:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:48.921 20:16:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:48.921 20:16:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85414 00:19:48.921 20:16:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 85414 ']' 00:19:48.921 20:16:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 85414 00:19:48.921 20:16:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:19:48.921 20:16:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:48.921 20:16:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85414 00:19:48.921 killing process with pid 85414 00:19:48.921 Received shutdown signal, test time was about 60.000000 seconds 00:19:48.921 00:19:48.921 Latency(us) 00:19:48.921 [2024-10-17T20:16:34.575Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:48.921 [2024-10-17T20:16:34.575Z] 
=================================================================================================================== 00:19:48.921 [2024-10-17T20:16:34.575Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:48.921 20:16:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:48.921 20:16:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:48.921 20:16:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85414' 00:19:48.921 20:16:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 85414 00:19:48.921 [2024-10-17 20:16:34.566836] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:48.921 20:16:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 85414 00:19:48.921 [2024-10-17 20:16:34.566997] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:48.921 [2024-10-17 20:16:34.567187] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:48.921 [2024-10-17 20:16:34.567212] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:49.488 [2024-10-17 20:16:34.948934] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:50.424 20:16:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:19:50.424 00:19:50.424 real 0m28.552s 00:19:50.424 user 0m37.233s 00:19:50.424 sys 0m2.938s 00:19:50.424 20:16:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:50.424 20:16:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.424 ************************************ 00:19:50.424 END TEST raid5f_rebuild_test_sb 00:19:50.424 ************************************ 00:19:50.424 20:16:35 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:19:50.424 20:16:35 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:19:50.424 20:16:35 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:19:50.424 20:16:35 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:50.424 20:16:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:50.424 ************************************ 00:19:50.424 START TEST raid_state_function_test_sb_4k 00:19:50.424 ************************************ 00:19:50.424 20:16:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:19:50.424 20:16:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:19:50.424 20:16:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:19:50.424 20:16:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:19:50.424 20:16:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:50.424 20:16:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:50.424 20:16:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:50.424 20:16:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:50.424 20:16:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:50.424 20:16:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:50.424 20:16:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:50.424 20:16:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:50.424 20:16:35 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:50.424 20:16:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:50.424 20:16:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:50.424 20:16:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:50.424 20:16:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:50.424 20:16:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:50.424 20:16:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:50.424 20:16:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:19:50.424 20:16:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:19:50.424 20:16:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:19:50.424 Process raid pid: 86237 00:19:50.424 20:16:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:19:50.424 20:16:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=86237 00:19:50.424 20:16:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:50.425 20:16:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86237' 00:19:50.425 20:16:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 86237 00:19:50.425 20:16:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 86237 ']' 00:19:50.425 20:16:35 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:50.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:50.425 20:16:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:50.425 20:16:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:50.425 20:16:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:50.425 20:16:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:50.425 [2024-10-17 20:16:36.038640] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 00:19:50.425 [2024-10-17 20:16:36.038829] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:50.683 [2024-10-17 20:16:36.213713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.683 [2024-10-17 20:16:36.334790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:50.942 [2024-10-17 20:16:36.519615] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:50.942 [2024-10-17 20:16:36.519665] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:51.509 20:16:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:51.509 20:16:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:19:51.509 20:16:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:19:51.509 20:16:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.509 20:16:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:51.509 [2024-10-17 20:16:37.028601] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:51.509 [2024-10-17 20:16:37.028676] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:51.509 [2024-10-17 20:16:37.028692] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:51.509 [2024-10-17 20:16:37.028707] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:51.509 20:16:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.509 20:16:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:51.509 20:16:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:51.509 20:16:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:51.509 20:16:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:51.509 20:16:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:51.509 20:16:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:51.509 20:16:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:51.509 20:16:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:51.509 20:16:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:51.509 
20:16:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:51.509 20:16:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:51.509 20:16:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.509 20:16:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:51.509 20:16:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:51.509 20:16:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.509 20:16:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:51.509 "name": "Existed_Raid", 00:19:51.509 "uuid": "15e5bb2d-857c-4f30-ac9c-de0772c4a6fb", 00:19:51.509 "strip_size_kb": 0, 00:19:51.509 "state": "configuring", 00:19:51.509 "raid_level": "raid1", 00:19:51.509 "superblock": true, 00:19:51.509 "num_base_bdevs": 2, 00:19:51.509 "num_base_bdevs_discovered": 0, 00:19:51.509 "num_base_bdevs_operational": 2, 00:19:51.509 "base_bdevs_list": [ 00:19:51.509 { 00:19:51.509 "name": "BaseBdev1", 00:19:51.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:51.509 "is_configured": false, 00:19:51.509 "data_offset": 0, 00:19:51.509 "data_size": 0 00:19:51.509 }, 00:19:51.509 { 00:19:51.509 "name": "BaseBdev2", 00:19:51.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:51.509 "is_configured": false, 00:19:51.509 "data_offset": 0, 00:19:51.509 "data_size": 0 00:19:51.509 } 00:19:51.509 ] 00:19:51.509 }' 00:19:51.509 20:16:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:51.509 20:16:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:52.078 20:16:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:19:52.078 20:16:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.078 20:16:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:52.078 [2024-10-17 20:16:37.548649] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:52.078 [2024-10-17 20:16:37.548688] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:19:52.078 20:16:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.078 20:16:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:52.078 20:16:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.078 20:16:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:52.078 [2024-10-17 20:16:37.556682] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:52.078 [2024-10-17 20:16:37.556746] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:52.078 [2024-10-17 20:16:37.556760] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:52.078 [2024-10-17 20:16:37.556778] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:52.078 20:16:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.078 20:16:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:19:52.078 20:16:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.078 20:16:37 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:52.078 [2024-10-17 20:16:37.601272] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:52.078 BaseBdev1 00:19:52.078 20:16:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.078 20:16:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:52.078 20:16:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:19:52.078 20:16:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:52.078 20:16:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:19:52.078 20:16:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:52.078 20:16:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:52.078 20:16:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:19:52.078 20:16:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.078 20:16:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:52.078 20:16:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.078 20:16:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:52.078 20:16:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.078 20:16:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:52.078 [ 00:19:52.078 { 00:19:52.078 "name": "BaseBdev1", 00:19:52.078 "aliases": [ 00:19:52.078 
"81e0fad7-97a9-4be4-b79d-6a0f3c309edf" 00:19:52.078 ], 00:19:52.078 "product_name": "Malloc disk", 00:19:52.078 "block_size": 4096, 00:19:52.078 "num_blocks": 8192, 00:19:52.078 "uuid": "81e0fad7-97a9-4be4-b79d-6a0f3c309edf", 00:19:52.078 "assigned_rate_limits": { 00:19:52.078 "rw_ios_per_sec": 0, 00:19:52.078 "rw_mbytes_per_sec": 0, 00:19:52.078 "r_mbytes_per_sec": 0, 00:19:52.078 "w_mbytes_per_sec": 0 00:19:52.078 }, 00:19:52.078 "claimed": true, 00:19:52.078 "claim_type": "exclusive_write", 00:19:52.078 "zoned": false, 00:19:52.078 "supported_io_types": { 00:19:52.078 "read": true, 00:19:52.078 "write": true, 00:19:52.078 "unmap": true, 00:19:52.078 "flush": true, 00:19:52.078 "reset": true, 00:19:52.078 "nvme_admin": false, 00:19:52.078 "nvme_io": false, 00:19:52.078 "nvme_io_md": false, 00:19:52.078 "write_zeroes": true, 00:19:52.078 "zcopy": true, 00:19:52.078 "get_zone_info": false, 00:19:52.078 "zone_management": false, 00:19:52.078 "zone_append": false, 00:19:52.078 "compare": false, 00:19:52.078 "compare_and_write": false, 00:19:52.078 "abort": true, 00:19:52.078 "seek_hole": false, 00:19:52.078 "seek_data": false, 00:19:52.078 "copy": true, 00:19:52.078 "nvme_iov_md": false 00:19:52.078 }, 00:19:52.078 "memory_domains": [ 00:19:52.078 { 00:19:52.078 "dma_device_id": "system", 00:19:52.078 "dma_device_type": 1 00:19:52.078 }, 00:19:52.078 { 00:19:52.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:52.079 "dma_device_type": 2 00:19:52.079 } 00:19:52.079 ], 00:19:52.079 "driver_specific": {} 00:19:52.079 } 00:19:52.079 ] 00:19:52.079 20:16:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.079 20:16:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:19:52.079 20:16:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:52.079 20:16:37 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:52.079 20:16:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:52.079 20:16:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:52.079 20:16:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:52.079 20:16:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:52.079 20:16:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:52.079 20:16:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:52.079 20:16:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:52.079 20:16:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:52.079 20:16:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.079 20:16:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:52.079 20:16:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.079 20:16:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:52.079 20:16:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.079 20:16:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:52.079 "name": "Existed_Raid", 00:19:52.079 "uuid": "b616defa-88f9-4307-b61a-40d73ec265af", 00:19:52.079 "strip_size_kb": 0, 00:19:52.079 "state": "configuring", 00:19:52.079 "raid_level": "raid1", 00:19:52.079 "superblock": true, 00:19:52.079 "num_base_bdevs": 2, 00:19:52.079 
"num_base_bdevs_discovered": 1, 00:19:52.079 "num_base_bdevs_operational": 2, 00:19:52.079 "base_bdevs_list": [ 00:19:52.079 { 00:19:52.079 "name": "BaseBdev1", 00:19:52.079 "uuid": "81e0fad7-97a9-4be4-b79d-6a0f3c309edf", 00:19:52.079 "is_configured": true, 00:19:52.079 "data_offset": 256, 00:19:52.079 "data_size": 7936 00:19:52.079 }, 00:19:52.079 { 00:19:52.079 "name": "BaseBdev2", 00:19:52.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:52.079 "is_configured": false, 00:19:52.079 "data_offset": 0, 00:19:52.079 "data_size": 0 00:19:52.079 } 00:19:52.079 ] 00:19:52.079 }' 00:19:52.079 20:16:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:52.079 20:16:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:52.646 20:16:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:52.646 20:16:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.646 20:16:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:52.646 [2024-10-17 20:16:38.165513] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:52.646 [2024-10-17 20:16:38.165575] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:52.646 20:16:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.646 20:16:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:52.646 20:16:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.646 20:16:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:52.646 [2024-10-17 20:16:38.173557] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:52.646 [2024-10-17 20:16:38.176002] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:52.646 [2024-10-17 20:16:38.176286] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:52.646 20:16:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.646 20:16:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:52.646 20:16:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:52.646 20:16:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:52.646 20:16:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:52.646 20:16:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:52.646 20:16:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:52.646 20:16:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:52.646 20:16:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:52.646 20:16:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:52.646 20:16:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:52.646 20:16:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:52.646 20:16:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:52.646 20:16:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:19:52.646 20:16:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.646 20:16:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:52.646 20:16:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:52.646 20:16:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.646 20:16:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:52.646 "name": "Existed_Raid", 00:19:52.646 "uuid": "177d21a9-0e11-403e-8e23-ca2a9b27614d", 00:19:52.646 "strip_size_kb": 0, 00:19:52.646 "state": "configuring", 00:19:52.646 "raid_level": "raid1", 00:19:52.646 "superblock": true, 00:19:52.646 "num_base_bdevs": 2, 00:19:52.646 "num_base_bdevs_discovered": 1, 00:19:52.646 "num_base_bdevs_operational": 2, 00:19:52.646 "base_bdevs_list": [ 00:19:52.646 { 00:19:52.646 "name": "BaseBdev1", 00:19:52.646 "uuid": "81e0fad7-97a9-4be4-b79d-6a0f3c309edf", 00:19:52.646 "is_configured": true, 00:19:52.646 "data_offset": 256, 00:19:52.646 "data_size": 7936 00:19:52.646 }, 00:19:52.646 { 00:19:52.646 "name": "BaseBdev2", 00:19:52.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:52.646 "is_configured": false, 00:19:52.646 "data_offset": 0, 00:19:52.646 "data_size": 0 00:19:52.646 } 00:19:52.646 ] 00:19:52.646 }' 00:19:52.646 20:16:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:52.646 20:16:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:53.214 20:16:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:19:53.214 20:16:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.214 20:16:38 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:53.214 [2024-10-17 20:16:38.754721] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:53.214 [2024-10-17 20:16:38.755092] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:53.214 [2024-10-17 20:16:38.755111] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:53.214 BaseBdev2 00:19:53.214 [2024-10-17 20:16:38.755506] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:53.214 [2024-10-17 20:16:38.755710] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:53.214 [2024-10-17 20:16:38.755733] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:53.214 [2024-10-17 20:16:38.755904] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:53.214 20:16:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.214 20:16:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:53.214 20:16:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:19:53.214 20:16:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:53.214 20:16:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:19:53.214 20:16:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:53.214 20:16:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:53.214 20:16:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:19:53.214 20:16:38 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.214 20:16:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:53.214 20:16:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.214 20:16:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:53.214 20:16:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.214 20:16:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:53.214 [ 00:19:53.214 { 00:19:53.214 "name": "BaseBdev2", 00:19:53.214 "aliases": [ 00:19:53.214 "be80ec36-9fb4-4c91-aa49-14f900d8c8a7" 00:19:53.214 ], 00:19:53.214 "product_name": "Malloc disk", 00:19:53.214 "block_size": 4096, 00:19:53.214 "num_blocks": 8192, 00:19:53.214 "uuid": "be80ec36-9fb4-4c91-aa49-14f900d8c8a7", 00:19:53.214 "assigned_rate_limits": { 00:19:53.214 "rw_ios_per_sec": 0, 00:19:53.214 "rw_mbytes_per_sec": 0, 00:19:53.214 "r_mbytes_per_sec": 0, 00:19:53.214 "w_mbytes_per_sec": 0 00:19:53.214 }, 00:19:53.214 "claimed": true, 00:19:53.214 "claim_type": "exclusive_write", 00:19:53.214 "zoned": false, 00:19:53.214 "supported_io_types": { 00:19:53.214 "read": true, 00:19:53.214 "write": true, 00:19:53.214 "unmap": true, 00:19:53.214 "flush": true, 00:19:53.214 "reset": true, 00:19:53.214 "nvme_admin": false, 00:19:53.214 "nvme_io": false, 00:19:53.214 "nvme_io_md": false, 00:19:53.214 "write_zeroes": true, 00:19:53.214 "zcopy": true, 00:19:53.214 "get_zone_info": false, 00:19:53.214 "zone_management": false, 00:19:53.214 "zone_append": false, 00:19:53.214 "compare": false, 00:19:53.214 "compare_and_write": false, 00:19:53.214 "abort": true, 00:19:53.214 "seek_hole": false, 00:19:53.214 "seek_data": false, 00:19:53.214 "copy": true, 00:19:53.214 "nvme_iov_md": false 
00:19:53.214 }, 00:19:53.214 "memory_domains": [ 00:19:53.214 { 00:19:53.214 "dma_device_id": "system", 00:19:53.214 "dma_device_type": 1 00:19:53.214 }, 00:19:53.214 { 00:19:53.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:53.214 "dma_device_type": 2 00:19:53.214 } 00:19:53.214 ], 00:19:53.214 "driver_specific": {} 00:19:53.214 } 00:19:53.214 ] 00:19:53.214 20:16:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.214 20:16:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:19:53.214 20:16:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:53.214 20:16:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:53.214 20:16:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:19:53.214 20:16:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:53.214 20:16:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:53.214 20:16:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:53.214 20:16:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:53.215 20:16:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:53.215 20:16:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:53.215 20:16:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:53.215 20:16:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:53.215 20:16:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:19:53.215 20:16:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:53.215 20:16:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:53.215 20:16:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.215 20:16:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:53.215 20:16:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.215 20:16:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:53.215 "name": "Existed_Raid", 00:19:53.215 "uuid": "177d21a9-0e11-403e-8e23-ca2a9b27614d", 00:19:53.215 "strip_size_kb": 0, 00:19:53.215 "state": "online", 00:19:53.215 "raid_level": "raid1", 00:19:53.215 "superblock": true, 00:19:53.215 "num_base_bdevs": 2, 00:19:53.215 "num_base_bdevs_discovered": 2, 00:19:53.215 "num_base_bdevs_operational": 2, 00:19:53.215 "base_bdevs_list": [ 00:19:53.215 { 00:19:53.215 "name": "BaseBdev1", 00:19:53.215 "uuid": "81e0fad7-97a9-4be4-b79d-6a0f3c309edf", 00:19:53.215 "is_configured": true, 00:19:53.215 "data_offset": 256, 00:19:53.215 "data_size": 7936 00:19:53.215 }, 00:19:53.215 { 00:19:53.215 "name": "BaseBdev2", 00:19:53.215 "uuid": "be80ec36-9fb4-4c91-aa49-14f900d8c8a7", 00:19:53.215 "is_configured": true, 00:19:53.215 "data_offset": 256, 00:19:53.215 "data_size": 7936 00:19:53.215 } 00:19:53.215 ] 00:19:53.215 }' 00:19:53.215 20:16:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:53.215 20:16:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:53.782 20:16:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:53.782 20:16:39 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:53.782 20:16:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:53.782 20:16:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:53.782 20:16:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:19:53.782 20:16:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:53.782 20:16:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:53.782 20:16:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.782 20:16:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:53.782 20:16:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:53.782 [2024-10-17 20:16:39.327274] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:53.782 20:16:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.782 20:16:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:53.782 "name": "Existed_Raid", 00:19:53.782 "aliases": [ 00:19:53.782 "177d21a9-0e11-403e-8e23-ca2a9b27614d" 00:19:53.782 ], 00:19:53.782 "product_name": "Raid Volume", 00:19:53.782 "block_size": 4096, 00:19:53.782 "num_blocks": 7936, 00:19:53.782 "uuid": "177d21a9-0e11-403e-8e23-ca2a9b27614d", 00:19:53.782 "assigned_rate_limits": { 00:19:53.782 "rw_ios_per_sec": 0, 00:19:53.782 "rw_mbytes_per_sec": 0, 00:19:53.782 "r_mbytes_per_sec": 0, 00:19:53.782 "w_mbytes_per_sec": 0 00:19:53.782 }, 00:19:53.782 "claimed": false, 00:19:53.782 "zoned": false, 00:19:53.782 "supported_io_types": { 00:19:53.782 "read": true, 
00:19:53.782 "write": true, 00:19:53.782 "unmap": false, 00:19:53.782 "flush": false, 00:19:53.782 "reset": true, 00:19:53.782 "nvme_admin": false, 00:19:53.782 "nvme_io": false, 00:19:53.782 "nvme_io_md": false, 00:19:53.782 "write_zeroes": true, 00:19:53.782 "zcopy": false, 00:19:53.782 "get_zone_info": false, 00:19:53.782 "zone_management": false, 00:19:53.782 "zone_append": false, 00:19:53.782 "compare": false, 00:19:53.782 "compare_and_write": false, 00:19:53.782 "abort": false, 00:19:53.782 "seek_hole": false, 00:19:53.782 "seek_data": false, 00:19:53.782 "copy": false, 00:19:53.782 "nvme_iov_md": false 00:19:53.782 }, 00:19:53.782 "memory_domains": [ 00:19:53.782 { 00:19:53.782 "dma_device_id": "system", 00:19:53.782 "dma_device_type": 1 00:19:53.782 }, 00:19:53.782 { 00:19:53.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:53.782 "dma_device_type": 2 00:19:53.782 }, 00:19:53.782 { 00:19:53.782 "dma_device_id": "system", 00:19:53.782 "dma_device_type": 1 00:19:53.782 }, 00:19:53.782 { 00:19:53.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:53.782 "dma_device_type": 2 00:19:53.782 } 00:19:53.782 ], 00:19:53.782 "driver_specific": { 00:19:53.782 "raid": { 00:19:53.782 "uuid": "177d21a9-0e11-403e-8e23-ca2a9b27614d", 00:19:53.782 "strip_size_kb": 0, 00:19:53.782 "state": "online", 00:19:53.782 "raid_level": "raid1", 00:19:53.782 "superblock": true, 00:19:53.782 "num_base_bdevs": 2, 00:19:53.782 "num_base_bdevs_discovered": 2, 00:19:53.782 "num_base_bdevs_operational": 2, 00:19:53.782 "base_bdevs_list": [ 00:19:53.782 { 00:19:53.782 "name": "BaseBdev1", 00:19:53.782 "uuid": "81e0fad7-97a9-4be4-b79d-6a0f3c309edf", 00:19:53.782 "is_configured": true, 00:19:53.782 "data_offset": 256, 00:19:53.782 "data_size": 7936 00:19:53.782 }, 00:19:53.782 { 00:19:53.782 "name": "BaseBdev2", 00:19:53.782 "uuid": "be80ec36-9fb4-4c91-aa49-14f900d8c8a7", 00:19:53.782 "is_configured": true, 00:19:53.782 "data_offset": 256, 00:19:53.782 "data_size": 7936 00:19:53.782 } 
00:19:53.782 ] 00:19:53.782 } 00:19:53.782 } 00:19:53.782 }' 00:19:53.782 20:16:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:54.040 20:16:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:54.040 BaseBdev2' 00:19:54.040 20:16:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:54.040 20:16:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:19:54.040 20:16:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:54.040 20:16:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:54.040 20:16:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.040 20:16:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:54.040 20:16:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:54.040 20:16:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.040 20:16:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:54.040 20:16:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:54.040 20:16:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:54.040 20:16:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:54.040 20:16:39 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.040 20:16:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:54.040 20:16:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:54.040 20:16:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.040 20:16:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:54.040 20:16:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:54.040 20:16:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:54.040 20:16:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.040 20:16:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:54.040 [2024-10-17 20:16:39.603061] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:54.040 20:16:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.040 20:16:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:54.040 20:16:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:19:54.040 20:16:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:54.040 20:16:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:19:54.041 20:16:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:54.041 20:16:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:19:54.041 20:16:39 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:54.041 20:16:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:54.041 20:16:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:54.041 20:16:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:54.041 20:16:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:54.041 20:16:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:54.041 20:16:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:54.299 20:16:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:54.299 20:16:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:54.299 20:16:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:54.299 20:16:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:54.299 20:16:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.299 20:16:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:54.299 20:16:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.299 20:16:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:54.299 "name": "Existed_Raid", 00:19:54.299 "uuid": "177d21a9-0e11-403e-8e23-ca2a9b27614d", 00:19:54.299 "strip_size_kb": 0, 00:19:54.299 "state": "online", 00:19:54.299 "raid_level": "raid1", 00:19:54.299 "superblock": true, 00:19:54.299 
"num_base_bdevs": 2, 00:19:54.299 "num_base_bdevs_discovered": 1, 00:19:54.299 "num_base_bdevs_operational": 1, 00:19:54.299 "base_bdevs_list": [ 00:19:54.299 { 00:19:54.299 "name": null, 00:19:54.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:54.299 "is_configured": false, 00:19:54.299 "data_offset": 0, 00:19:54.299 "data_size": 7936 00:19:54.299 }, 00:19:54.299 { 00:19:54.299 "name": "BaseBdev2", 00:19:54.299 "uuid": "be80ec36-9fb4-4c91-aa49-14f900d8c8a7", 00:19:54.299 "is_configured": true, 00:19:54.299 "data_offset": 256, 00:19:54.299 "data_size": 7936 00:19:54.299 } 00:19:54.299 ] 00:19:54.299 }' 00:19:54.299 20:16:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:54.299 20:16:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:54.866 20:16:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:54.866 20:16:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:54.866 20:16:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:54.866 20:16:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.866 20:16:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:54.866 20:16:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:54.866 20:16:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.866 20:16:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:54.866 20:16:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:54.866 20:16:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd 
bdev_malloc_delete BaseBdev2 00:19:54.866 20:16:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.866 20:16:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:54.866 [2024-10-17 20:16:40.288257] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:54.866 [2024-10-17 20:16:40.288383] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:54.866 [2024-10-17 20:16:40.362568] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:54.866 [2024-10-17 20:16:40.362891] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:54.866 [2024-10-17 20:16:40.362922] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:54.866 20:16:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.866 20:16:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:54.866 20:16:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:54.866 20:16:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:54.866 20:16:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:54.866 20:16:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.866 20:16:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:54.866 20:16:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.866 20:16:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:54.866 20:16:40 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:54.866 20:16:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:19:54.866 20:16:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 86237 00:19:54.866 20:16:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 86237 ']' 00:19:54.866 20:16:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 86237 00:19:54.866 20:16:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:19:54.866 20:16:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:54.866 20:16:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86237 00:19:54.866 killing process with pid 86237 00:19:54.866 20:16:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:54.866 20:16:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:54.866 20:16:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86237' 00:19:54.866 20:16:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@969 -- # kill 86237 00:19:54.866 [2024-10-17 20:16:40.453614] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:54.866 20:16:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@974 -- # wait 86237 00:19:54.866 [2024-10-17 20:16:40.467768] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:55.800 20:16:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:19:55.800 00:19:55.800 real 0m5.525s 00:19:55.800 user 0m8.385s 00:19:55.800 sys 0m0.830s 00:19:55.800 20:16:41 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:55.800 ************************************ 00:19:55.800 END TEST raid_state_function_test_sb_4k 00:19:55.800 ************************************ 00:19:55.800 20:16:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:56.059 20:16:41 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:19:56.059 20:16:41 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:19:56.059 20:16:41 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:56.059 20:16:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:56.059 ************************************ 00:19:56.059 START TEST raid_superblock_test_4k 00:19:56.059 ************************************ 00:19:56.059 20:16:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:19:56.059 20:16:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:19:56.059 20:16:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:19:56.059 20:16:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:56.059 20:16:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:56.059 20:16:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:56.059 20:16:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:19:56.059 20:16:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:56.059 20:16:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:56.059 20:16:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:56.059 
20:16:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:56.059 20:16:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:56.059 20:16:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:56.059 20:16:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:56.059 20:16:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:19:56.059 20:16:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:19:56.059 20:16:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86489 00:19:56.059 20:16:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:56.059 20:16:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 86489 00:19:56.059 20:16:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@831 -- # '[' -z 86489 ']' 00:19:56.059 20:16:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:56.059 20:16:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:56.059 20:16:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:56.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:56.059 20:16:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:56.059 20:16:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:56.059 [2024-10-17 20:16:41.616514] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 
00:19:56.059 [2024-10-17 20:16:41.616733] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86489 ] 00:19:56.318 [2024-10-17 20:16:41.792172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.318 [2024-10-17 20:16:41.909757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:56.577 [2024-10-17 20:16:42.111472] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:56.577 [2024-10-17 20:16:42.111542] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:57.146 20:16:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:57.146 20:16:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # return 0 00:19:57.146 20:16:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:57.146 20:16:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:57.146 20:16:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:57.146 20:16:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:19:57.146 20:16:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:57.146 20:16:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:57.146 20:16:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:57.146 20:16:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:57.146 20:16:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:19:57.146 20:16:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.146 20:16:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:57.146 malloc1 00:19:57.146 20:16:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.146 20:16:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:57.146 20:16:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.146 20:16:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:57.146 [2024-10-17 20:16:42.603534] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:57.146 [2024-10-17 20:16:42.603772] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:57.146 [2024-10-17 20:16:42.603852] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:57.146 [2024-10-17 20:16:42.604122] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:57.146 [2024-10-17 20:16:42.607105] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:57.146 [2024-10-17 20:16:42.607275] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:57.146 pt1 00:19:57.146 20:16:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.146 20:16:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:57.146 20:16:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:57.146 20:16:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:57.146 20:16:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:19:57.146 20:16:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:57.146 20:16:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:57.146 20:16:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:57.146 20:16:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:57.146 20:16:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:19:57.146 20:16:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.146 20:16:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:57.146 malloc2 00:19:57.146 20:16:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.146 20:16:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:57.146 20:16:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.146 20:16:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:57.146 [2024-10-17 20:16:42.660884] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:57.146 [2024-10-17 20:16:42.660960] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:57.146 [2024-10-17 20:16:42.660991] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:57.146 [2024-10-17 20:16:42.661031] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:57.146 [2024-10-17 20:16:42.663902] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:57.146 [2024-10-17 
20:16:42.663949] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:57.146 pt2 00:19:57.146 20:16:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.146 20:16:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:57.146 20:16:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:57.146 20:16:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:19:57.146 20:16:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.146 20:16:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:57.146 [2024-10-17 20:16:42.668963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:57.146 [2024-10-17 20:16:42.671472] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:57.146 [2024-10-17 20:16:42.671741] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:57.146 [2024-10-17 20:16:42.671761] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:57.146 [2024-10-17 20:16:42.672130] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:57.146 [2024-10-17 20:16:42.672364] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:57.146 [2024-10-17 20:16:42.672392] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:57.146 [2024-10-17 20:16:42.672597] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:57.146 20:16:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.146 20:16:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:57.146 20:16:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:57.146 20:16:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:57.146 20:16:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:57.146 20:16:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:57.146 20:16:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:57.146 20:16:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:57.146 20:16:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:57.146 20:16:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:57.146 20:16:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:57.146 20:16:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:57.146 20:16:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.146 20:16:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:57.146 20:16:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:57.146 20:16:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.146 20:16:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:57.146 "name": "raid_bdev1", 00:19:57.146 "uuid": "b50253c8-745c-438e-b4b0-8cd8b2daa2f6", 00:19:57.146 "strip_size_kb": 0, 00:19:57.146 "state": "online", 00:19:57.146 "raid_level": "raid1", 00:19:57.146 "superblock": true, 00:19:57.146 "num_base_bdevs": 2, 00:19:57.146 
"num_base_bdevs_discovered": 2, 00:19:57.146 "num_base_bdevs_operational": 2, 00:19:57.146 "base_bdevs_list": [ 00:19:57.146 { 00:19:57.146 "name": "pt1", 00:19:57.146 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:57.146 "is_configured": true, 00:19:57.146 "data_offset": 256, 00:19:57.146 "data_size": 7936 00:19:57.146 }, 00:19:57.146 { 00:19:57.146 "name": "pt2", 00:19:57.146 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:57.146 "is_configured": true, 00:19:57.146 "data_offset": 256, 00:19:57.146 "data_size": 7936 00:19:57.146 } 00:19:57.146 ] 00:19:57.146 }' 00:19:57.146 20:16:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:57.146 20:16:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:57.714 20:16:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:57.714 20:16:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:57.714 20:16:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:57.714 20:16:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:57.714 20:16:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:19:57.714 20:16:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:57.714 20:16:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:57.714 20:16:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:57.714 20:16:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.714 20:16:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:57.714 [2024-10-17 20:16:43.201456] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:19:57.714 20:16:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.714 20:16:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:57.714 "name": "raid_bdev1", 00:19:57.714 "aliases": [ 00:19:57.714 "b50253c8-745c-438e-b4b0-8cd8b2daa2f6" 00:19:57.714 ], 00:19:57.714 "product_name": "Raid Volume", 00:19:57.714 "block_size": 4096, 00:19:57.714 "num_blocks": 7936, 00:19:57.714 "uuid": "b50253c8-745c-438e-b4b0-8cd8b2daa2f6", 00:19:57.714 "assigned_rate_limits": { 00:19:57.714 "rw_ios_per_sec": 0, 00:19:57.714 "rw_mbytes_per_sec": 0, 00:19:57.714 "r_mbytes_per_sec": 0, 00:19:57.714 "w_mbytes_per_sec": 0 00:19:57.714 }, 00:19:57.714 "claimed": false, 00:19:57.714 "zoned": false, 00:19:57.714 "supported_io_types": { 00:19:57.714 "read": true, 00:19:57.714 "write": true, 00:19:57.714 "unmap": false, 00:19:57.714 "flush": false, 00:19:57.714 "reset": true, 00:19:57.714 "nvme_admin": false, 00:19:57.714 "nvme_io": false, 00:19:57.714 "nvme_io_md": false, 00:19:57.714 "write_zeroes": true, 00:19:57.714 "zcopy": false, 00:19:57.714 "get_zone_info": false, 00:19:57.714 "zone_management": false, 00:19:57.714 "zone_append": false, 00:19:57.714 "compare": false, 00:19:57.714 "compare_and_write": false, 00:19:57.714 "abort": false, 00:19:57.714 "seek_hole": false, 00:19:57.714 "seek_data": false, 00:19:57.714 "copy": false, 00:19:57.714 "nvme_iov_md": false 00:19:57.714 }, 00:19:57.714 "memory_domains": [ 00:19:57.714 { 00:19:57.714 "dma_device_id": "system", 00:19:57.714 "dma_device_type": 1 00:19:57.714 }, 00:19:57.714 { 00:19:57.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:57.714 "dma_device_type": 2 00:19:57.714 }, 00:19:57.714 { 00:19:57.714 "dma_device_id": "system", 00:19:57.714 "dma_device_type": 1 00:19:57.714 }, 00:19:57.714 { 00:19:57.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:57.714 "dma_device_type": 2 00:19:57.714 } 00:19:57.714 ], 
00:19:57.714 "driver_specific": { 00:19:57.714 "raid": { 00:19:57.714 "uuid": "b50253c8-745c-438e-b4b0-8cd8b2daa2f6", 00:19:57.714 "strip_size_kb": 0, 00:19:57.714 "state": "online", 00:19:57.714 "raid_level": "raid1", 00:19:57.714 "superblock": true, 00:19:57.714 "num_base_bdevs": 2, 00:19:57.714 "num_base_bdevs_discovered": 2, 00:19:57.714 "num_base_bdevs_operational": 2, 00:19:57.714 "base_bdevs_list": [ 00:19:57.714 { 00:19:57.714 "name": "pt1", 00:19:57.714 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:57.714 "is_configured": true, 00:19:57.714 "data_offset": 256, 00:19:57.714 "data_size": 7936 00:19:57.714 }, 00:19:57.714 { 00:19:57.714 "name": "pt2", 00:19:57.714 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:57.714 "is_configured": true, 00:19:57.714 "data_offset": 256, 00:19:57.714 "data_size": 7936 00:19:57.714 } 00:19:57.714 ] 00:19:57.714 } 00:19:57.714 } 00:19:57.714 }' 00:19:57.714 20:16:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:57.714 20:16:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:57.714 pt2' 00:19:57.714 20:16:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:57.714 20:16:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:19:57.714 20:16:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:57.714 20:16:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:57.714 20:16:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:57.714 20:16:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.714 20:16:43 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:57.714 20:16:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.974 20:16:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:57.974 20:16:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:57.974 20:16:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:57.974 20:16:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:57.974 20:16:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.974 20:16:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:57.974 20:16:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:57.974 20:16:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.974 20:16:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:57.974 20:16:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:57.974 20:16:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:57.974 20:16:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.974 20:16:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:57.974 20:16:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:57.974 [2024-10-17 20:16:43.445489] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:57.974 20:16:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:19:57.974 20:16:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b50253c8-745c-438e-b4b0-8cd8b2daa2f6 00:19:57.974 20:16:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z b50253c8-745c-438e-b4b0-8cd8b2daa2f6 ']' 00:19:57.974 20:16:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:57.974 20:16:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.974 20:16:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:57.974 [2024-10-17 20:16:43.497176] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:57.974 [2024-10-17 20:16:43.497338] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:57.974 [2024-10-17 20:16:43.497494] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:57.974 [2024-10-17 20:16:43.497571] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:57.974 [2024-10-17 20:16:43.497592] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:57.974 20:16:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.974 20:16:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:57.974 20:16:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:57.974 20:16:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.974 20:16:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:57.974 20:16:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.974 20:16:43 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:57.974 20:16:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:57.974 20:16:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:57.974 20:16:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:19:57.974 20:16:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.974 20:16:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:57.974 20:16:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.974 20:16:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:57.974 20:16:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:57.974 20:16:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.974 20:16:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:57.974 20:16:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.974 20:16:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:57.974 20:16:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.974 20:16:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:57.974 20:16:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:57.974 20:16:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.974 20:16:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:19:57.974 20:16:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:57.974 20:16:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # local es=0 00:19:57.974 20:16:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:57.974 20:16:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:57.974 20:16:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:57.974 20:16:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:57.974 20:16:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:57.974 20:16:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:57.974 20:16:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.974 20:16:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:58.232 [2024-10-17 20:16:43.629219] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:58.232 [2024-10-17 20:16:43.631749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:58.232 [2024-10-17 20:16:43.631834] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:58.232 [2024-10-17 20:16:43.631921] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:58.232 [2024-10-17 20:16:43.631946] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:58.232 [2024-10-17 20:16:43.631975] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:19:58.232 request: 00:19:58.232 { 00:19:58.232 "name": "raid_bdev1", 00:19:58.232 "raid_level": "raid1", 00:19:58.232 "base_bdevs": [ 00:19:58.232 "malloc1", 00:19:58.232 "malloc2" 00:19:58.232 ], 00:19:58.232 "superblock": false, 00:19:58.232 "method": "bdev_raid_create", 00:19:58.232 "req_id": 1 00:19:58.232 } 00:19:58.232 Got JSON-RPC error response 00:19:58.232 response: 00:19:58.232 { 00:19:58.232 "code": -17, 00:19:58.232 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:58.232 } 00:19:58.232 20:16:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:58.232 20:16:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # es=1 00:19:58.232 20:16:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:58.232 20:16:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:58.232 20:16:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:58.232 20:16:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.232 20:16:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.232 20:16:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:58.232 20:16:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:58.232 20:16:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.232 20:16:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:58.232 20:16:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:58.232 20:16:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:19:58.232 20:16:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.232 20:16:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:58.232 [2024-10-17 20:16:43.701206] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:58.232 [2024-10-17 20:16:43.701272] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:58.232 [2024-10-17 20:16:43.701299] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:58.232 [2024-10-17 20:16:43.701316] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:58.232 [2024-10-17 20:16:43.704514] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:58.232 [2024-10-17 20:16:43.704590] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:58.232 [2024-10-17 20:16:43.704708] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:58.232 [2024-10-17 20:16:43.704794] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:58.232 pt1 00:19:58.232 20:16:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.232 20:16:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:19:58.232 20:16:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:58.232 20:16:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:58.232 20:16:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:58.232 20:16:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:58.232 20:16:43 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:58.232 20:16:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:58.232 20:16:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:58.232 20:16:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:58.232 20:16:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:58.232 20:16:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.232 20:16:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:58.232 20:16:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.232 20:16:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:58.232 20:16:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.232 20:16:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:58.232 "name": "raid_bdev1", 00:19:58.232 "uuid": "b50253c8-745c-438e-b4b0-8cd8b2daa2f6", 00:19:58.232 "strip_size_kb": 0, 00:19:58.232 "state": "configuring", 00:19:58.232 "raid_level": "raid1", 00:19:58.232 "superblock": true, 00:19:58.232 "num_base_bdevs": 2, 00:19:58.232 "num_base_bdevs_discovered": 1, 00:19:58.232 "num_base_bdevs_operational": 2, 00:19:58.232 "base_bdevs_list": [ 00:19:58.232 { 00:19:58.232 "name": "pt1", 00:19:58.232 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:58.232 "is_configured": true, 00:19:58.232 "data_offset": 256, 00:19:58.232 "data_size": 7936 00:19:58.232 }, 00:19:58.232 { 00:19:58.232 "name": null, 00:19:58.232 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:58.232 "is_configured": false, 00:19:58.232 "data_offset": 256, 00:19:58.232 "data_size": 7936 00:19:58.232 } 
00:19:58.232 ] 00:19:58.232 }' 00:19:58.232 20:16:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:58.232 20:16:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:58.798 20:16:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:19:58.798 20:16:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:58.798 20:16:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:58.798 20:16:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:58.798 20:16:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.798 20:16:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:58.798 [2024-10-17 20:16:44.289415] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:58.798 [2024-10-17 20:16:44.289775] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:58.798 [2024-10-17 20:16:44.289816] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:58.798 [2024-10-17 20:16:44.289836] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:58.798 [2024-10-17 20:16:44.290525] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:58.798 [2024-10-17 20:16:44.290566] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:58.798 [2024-10-17 20:16:44.290672] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:58.798 [2024-10-17 20:16:44.290710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:58.798 [2024-10-17 20:16:44.290861] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:19:58.798 [2024-10-17 20:16:44.290887] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:58.798 [2024-10-17 20:16:44.291243] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:58.798 [2024-10-17 20:16:44.291466] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:58.798 [2024-10-17 20:16:44.291483] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:58.798 [2024-10-17 20:16:44.291709] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:58.798 pt2 00:19:58.798 20:16:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.798 20:16:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:58.798 20:16:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:58.798 20:16:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:58.798 20:16:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:58.799 20:16:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:58.799 20:16:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:58.799 20:16:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:58.799 20:16:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:58.799 20:16:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:58.799 20:16:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:58.799 20:16:44 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:58.799 20:16:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:58.799 20:16:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.799 20:16:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:58.799 20:16:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.799 20:16:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:58.799 20:16:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.799 20:16:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:58.799 "name": "raid_bdev1", 00:19:58.799 "uuid": "b50253c8-745c-438e-b4b0-8cd8b2daa2f6", 00:19:58.799 "strip_size_kb": 0, 00:19:58.799 "state": "online", 00:19:58.799 "raid_level": "raid1", 00:19:58.799 "superblock": true, 00:19:58.799 "num_base_bdevs": 2, 00:19:58.799 "num_base_bdevs_discovered": 2, 00:19:58.799 "num_base_bdevs_operational": 2, 00:19:58.799 "base_bdevs_list": [ 00:19:58.799 { 00:19:58.799 "name": "pt1", 00:19:58.799 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:58.799 "is_configured": true, 00:19:58.799 "data_offset": 256, 00:19:58.799 "data_size": 7936 00:19:58.799 }, 00:19:58.799 { 00:19:58.799 "name": "pt2", 00:19:58.799 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:58.799 "is_configured": true, 00:19:58.799 "data_offset": 256, 00:19:58.799 "data_size": 7936 00:19:58.799 } 00:19:58.799 ] 00:19:58.799 }' 00:19:58.799 20:16:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:58.799 20:16:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:59.412 20:16:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:19:59.412 20:16:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:59.412 20:16:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:59.412 20:16:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:59.412 20:16:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:19:59.412 20:16:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:59.412 20:16:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:59.412 20:16:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.412 20:16:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:59.413 20:16:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:59.413 [2024-10-17 20:16:44.886185] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:59.413 20:16:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.413 20:16:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:59.413 "name": "raid_bdev1", 00:19:59.413 "aliases": [ 00:19:59.413 "b50253c8-745c-438e-b4b0-8cd8b2daa2f6" 00:19:59.413 ], 00:19:59.413 "product_name": "Raid Volume", 00:19:59.413 "block_size": 4096, 00:19:59.413 "num_blocks": 7936, 00:19:59.413 "uuid": "b50253c8-745c-438e-b4b0-8cd8b2daa2f6", 00:19:59.413 "assigned_rate_limits": { 00:19:59.413 "rw_ios_per_sec": 0, 00:19:59.413 "rw_mbytes_per_sec": 0, 00:19:59.413 "r_mbytes_per_sec": 0, 00:19:59.413 "w_mbytes_per_sec": 0 00:19:59.413 }, 00:19:59.413 "claimed": false, 00:19:59.413 "zoned": false, 00:19:59.413 "supported_io_types": { 00:19:59.413 "read": true, 00:19:59.413 "write": true, 00:19:59.413 "unmap": false, 
00:19:59.413 "flush": false, 00:19:59.413 "reset": true, 00:19:59.413 "nvme_admin": false, 00:19:59.413 "nvme_io": false, 00:19:59.413 "nvme_io_md": false, 00:19:59.413 "write_zeroes": true, 00:19:59.413 "zcopy": false, 00:19:59.413 "get_zone_info": false, 00:19:59.413 "zone_management": false, 00:19:59.413 "zone_append": false, 00:19:59.413 "compare": false, 00:19:59.413 "compare_and_write": false, 00:19:59.413 "abort": false, 00:19:59.413 "seek_hole": false, 00:19:59.413 "seek_data": false, 00:19:59.413 "copy": false, 00:19:59.413 "nvme_iov_md": false 00:19:59.413 }, 00:19:59.413 "memory_domains": [ 00:19:59.413 { 00:19:59.413 "dma_device_id": "system", 00:19:59.413 "dma_device_type": 1 00:19:59.413 }, 00:19:59.413 { 00:19:59.413 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:59.413 "dma_device_type": 2 00:19:59.413 }, 00:19:59.413 { 00:19:59.413 "dma_device_id": "system", 00:19:59.413 "dma_device_type": 1 00:19:59.413 }, 00:19:59.413 { 00:19:59.413 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:59.413 "dma_device_type": 2 00:19:59.413 } 00:19:59.413 ], 00:19:59.413 "driver_specific": { 00:19:59.413 "raid": { 00:19:59.413 "uuid": "b50253c8-745c-438e-b4b0-8cd8b2daa2f6", 00:19:59.413 "strip_size_kb": 0, 00:19:59.413 "state": "online", 00:19:59.413 "raid_level": "raid1", 00:19:59.413 "superblock": true, 00:19:59.413 "num_base_bdevs": 2, 00:19:59.413 "num_base_bdevs_discovered": 2, 00:19:59.413 "num_base_bdevs_operational": 2, 00:19:59.413 "base_bdevs_list": [ 00:19:59.413 { 00:19:59.413 "name": "pt1", 00:19:59.413 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:59.413 "is_configured": true, 00:19:59.413 "data_offset": 256, 00:19:59.413 "data_size": 7936 00:19:59.413 }, 00:19:59.413 { 00:19:59.413 "name": "pt2", 00:19:59.413 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:59.413 "is_configured": true, 00:19:59.413 "data_offset": 256, 00:19:59.413 "data_size": 7936 00:19:59.413 } 00:19:59.413 ] 00:19:59.413 } 00:19:59.413 } 00:19:59.413 }' 00:19:59.413 
20:16:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:59.413 20:16:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:59.413 pt2' 00:19:59.413 20:16:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:59.413 20:16:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:19:59.413 20:16:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:59.413 20:16:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:59.413 20:16:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.413 20:16:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:59.413 20:16:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:59.413 20:16:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.694 20:16:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:59.694 20:16:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:59.694 20:16:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:59.694 20:16:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:59.694 20:16:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.694 20:16:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:59.694 20:16:45 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:59.694 20:16:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.694 20:16:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:59.694 20:16:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:59.694 20:16:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:59.694 20:16:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.694 20:16:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:59.694 20:16:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:59.694 [2024-10-17 20:16:45.134249] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:59.694 20:16:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.694 20:16:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' b50253c8-745c-438e-b4b0-8cd8b2daa2f6 '!=' b50253c8-745c-438e-b4b0-8cd8b2daa2f6 ']' 00:19:59.694 20:16:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:19:59.694 20:16:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:59.694 20:16:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:19:59.694 20:16:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:19:59.694 20:16:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.694 20:16:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:59.694 [2024-10-17 20:16:45.178029] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 
00:19:59.694 20:16:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.694 20:16:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:59.694 20:16:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:59.694 20:16:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:59.694 20:16:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:59.694 20:16:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:59.694 20:16:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:59.694 20:16:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:59.694 20:16:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:59.694 20:16:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:59.694 20:16:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:59.694 20:16:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:59.694 20:16:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.694 20:16:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:59.694 20:16:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:59.694 20:16:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.694 20:16:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:59.694 "name": "raid_bdev1", 00:19:59.694 "uuid": 
"b50253c8-745c-438e-b4b0-8cd8b2daa2f6", 00:19:59.694 "strip_size_kb": 0, 00:19:59.694 "state": "online", 00:19:59.694 "raid_level": "raid1", 00:19:59.694 "superblock": true, 00:19:59.694 "num_base_bdevs": 2, 00:19:59.694 "num_base_bdevs_discovered": 1, 00:19:59.694 "num_base_bdevs_operational": 1, 00:19:59.694 "base_bdevs_list": [ 00:19:59.694 { 00:19:59.694 "name": null, 00:19:59.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.694 "is_configured": false, 00:19:59.694 "data_offset": 0, 00:19:59.694 "data_size": 7936 00:19:59.694 }, 00:19:59.694 { 00:19:59.694 "name": "pt2", 00:19:59.694 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:59.694 "is_configured": true, 00:19:59.694 "data_offset": 256, 00:19:59.694 "data_size": 7936 00:19:59.694 } 00:19:59.694 ] 00:19:59.694 }' 00:19:59.694 20:16:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:59.694 20:16:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:00.263 20:16:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:00.263 20:16:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.263 20:16:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:00.263 [2024-10-17 20:16:45.654095] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:00.263 [2024-10-17 20:16:45.654400] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:00.263 [2024-10-17 20:16:45.654517] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:00.263 [2024-10-17 20:16:45.654586] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:00.263 [2024-10-17 20:16:45.654605] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state 
offline 00:20:00.263 20:16:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.263 20:16:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:00.263 20:16:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.263 20:16:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:00.263 20:16:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:20:00.263 20:16:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.263 20:16:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:20:00.263 20:16:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:20:00.263 20:16:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:20:00.263 20:16:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:00.263 20:16:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:20:00.263 20:16:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.263 20:16:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:00.263 20:16:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.263 20:16:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:20:00.263 20:16:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:00.263 20:16:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:20:00.263 20:16:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:20:00.263 20:16:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 
00:20:00.263 20:16:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:00.263 20:16:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.263 20:16:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:00.263 [2024-10-17 20:16:45.726096] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:00.263 [2024-10-17 20:16:45.726157] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:00.263 [2024-10-17 20:16:45.726179] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:20:00.263 [2024-10-17 20:16:45.726194] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:00.263 [2024-10-17 20:16:45.729045] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:00.263 [2024-10-17 20:16:45.729279] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:00.263 [2024-10-17 20:16:45.729418] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:00.263 [2024-10-17 20:16:45.729480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:00.263 [2024-10-17 20:16:45.729615] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:00.263 [2024-10-17 20:16:45.729635] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:00.263 [2024-10-17 20:16:45.729901] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:00.263 [2024-10-17 20:16:45.730138] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:00.263 [2024-10-17 20:16:45.730154] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000008200 00:20:00.263 [2024-10-17 20:16:45.730356] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:00.263 pt2 00:20:00.263 20:16:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.263 20:16:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:00.263 20:16:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:00.263 20:16:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:00.263 20:16:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:00.263 20:16:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:00.263 20:16:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:00.263 20:16:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:00.263 20:16:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:00.263 20:16:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:00.263 20:16:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:00.263 20:16:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:00.263 20:16:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:00.263 20:16:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.263 20:16:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:00.263 20:16:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.263 20:16:45 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:00.263 "name": "raid_bdev1", 00:20:00.263 "uuid": "b50253c8-745c-438e-b4b0-8cd8b2daa2f6", 00:20:00.263 "strip_size_kb": 0, 00:20:00.263 "state": "online", 00:20:00.263 "raid_level": "raid1", 00:20:00.263 "superblock": true, 00:20:00.263 "num_base_bdevs": 2, 00:20:00.263 "num_base_bdevs_discovered": 1, 00:20:00.263 "num_base_bdevs_operational": 1, 00:20:00.263 "base_bdevs_list": [ 00:20:00.263 { 00:20:00.263 "name": null, 00:20:00.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:00.263 "is_configured": false, 00:20:00.263 "data_offset": 256, 00:20:00.263 "data_size": 7936 00:20:00.263 }, 00:20:00.263 { 00:20:00.263 "name": "pt2", 00:20:00.263 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:00.263 "is_configured": true, 00:20:00.263 "data_offset": 256, 00:20:00.263 "data_size": 7936 00:20:00.263 } 00:20:00.263 ] 00:20:00.263 }' 00:20:00.263 20:16:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:00.263 20:16:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:00.830 20:16:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:00.830 20:16:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.830 20:16:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:00.830 [2024-10-17 20:16:46.210411] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:00.830 [2024-10-17 20:16:46.210452] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:00.830 [2024-10-17 20:16:46.210537] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:00.830 [2024-10-17 20:16:46.210598] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:20:00.830 [2024-10-17 20:16:46.210612] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:20:00.830 20:16:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.830 20:16:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:00.830 20:16:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:20:00.830 20:16:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.830 20:16:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:00.830 20:16:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.830 20:16:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:20:00.830 20:16:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:20:00.830 20:16:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:20:00.830 20:16:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:00.830 20:16:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.830 20:16:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:00.830 [2024-10-17 20:16:46.270431] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:00.830 [2024-10-17 20:16:46.270511] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:00.830 [2024-10-17 20:16:46.270539] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:20:00.830 [2024-10-17 20:16:46.270552] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:00.830 [2024-10-17 20:16:46.273468] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:00.830 [2024-10-17 20:16:46.273511] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:00.830 [2024-10-17 20:16:46.273626] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:00.830 [2024-10-17 20:16:46.273681] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:00.830 [2024-10-17 20:16:46.273839] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:20:00.830 [2024-10-17 20:16:46.273856] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:00.830 [2024-10-17 20:16:46.273876] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:20:00.830 [2024-10-17 20:16:46.273944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:00.830 [2024-10-17 20:16:46.274107] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:20:00.830 [2024-10-17 20:16:46.274123] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:00.830 [2024-10-17 20:16:46.274422] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:00.830 [2024-10-17 20:16:46.274607] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:20:00.830 [2024-10-17 20:16:46.274626] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:20:00.830 [2024-10-17 20:16:46.274893] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:00.830 pt1 00:20:00.830 20:16:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.830 20:16:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- 
# '[' 2 -gt 2 ']' 00:20:00.830 20:16:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:00.830 20:16:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:00.830 20:16:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:00.830 20:16:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:00.830 20:16:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:00.830 20:16:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:00.830 20:16:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:00.830 20:16:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:00.830 20:16:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:00.830 20:16:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:00.830 20:16:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:00.830 20:16:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:00.831 20:16:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.831 20:16:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:00.831 20:16:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.831 20:16:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:00.831 "name": "raid_bdev1", 00:20:00.831 "uuid": "b50253c8-745c-438e-b4b0-8cd8b2daa2f6", 00:20:00.831 "strip_size_kb": 0, 00:20:00.831 "state": "online", 00:20:00.831 
"raid_level": "raid1", 00:20:00.831 "superblock": true, 00:20:00.831 "num_base_bdevs": 2, 00:20:00.831 "num_base_bdevs_discovered": 1, 00:20:00.831 "num_base_bdevs_operational": 1, 00:20:00.831 "base_bdevs_list": [ 00:20:00.831 { 00:20:00.831 "name": null, 00:20:00.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:00.831 "is_configured": false, 00:20:00.831 "data_offset": 256, 00:20:00.831 "data_size": 7936 00:20:00.831 }, 00:20:00.831 { 00:20:00.831 "name": "pt2", 00:20:00.831 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:00.831 "is_configured": true, 00:20:00.831 "data_offset": 256, 00:20:00.831 "data_size": 7936 00:20:00.831 } 00:20:00.831 ] 00:20:00.831 }' 00:20:00.831 20:16:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:00.831 20:16:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:01.397 20:16:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:20:01.398 20:16:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:20:01.398 20:16:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.398 20:16:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:01.398 20:16:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.398 20:16:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:20:01.398 20:16:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:20:01.398 20:16:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:01.398 20:16:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.398 20:16:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # 
set +x 00:20:01.398 [2024-10-17 20:16:46.823266] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:01.398 20:16:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.398 20:16:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' b50253c8-745c-438e-b4b0-8cd8b2daa2f6 '!=' b50253c8-745c-438e-b4b0-8cd8b2daa2f6 ']' 00:20:01.398 20:16:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86489 00:20:01.398 20:16:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@950 -- # '[' -z 86489 ']' 00:20:01.398 20:16:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # kill -0 86489 00:20:01.398 20:16:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # uname 00:20:01.398 20:16:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:01.398 20:16:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86489 00:20:01.398 killing process with pid 86489 00:20:01.398 20:16:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:01.398 20:16:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:01.398 20:16:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86489' 00:20:01.398 20:16:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@969 -- # kill 86489 00:20:01.398 [2024-10-17 20:16:46.883468] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:01.398 20:16:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@974 -- # wait 86489 00:20:01.398 [2024-10-17 20:16:46.883561] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:01.398 [2024-10-17 20:16:46.883633] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:01.398 [2024-10-17 20:16:46.883661] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:20:01.655 [2024-10-17 20:16:47.053115] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:02.591 ************************************ 00:20:02.591 END TEST raid_superblock_test_4k 00:20:02.591 ************************************ 00:20:02.591 20:16:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:20:02.591 00:20:02.591 real 0m6.632s 00:20:02.591 user 0m10.451s 00:20:02.591 sys 0m0.962s 00:20:02.591 20:16:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:02.591 20:16:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:02.591 20:16:48 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:20:02.591 20:16:48 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:20:02.591 20:16:48 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:20:02.591 20:16:48 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:02.591 20:16:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:02.591 ************************************ 00:20:02.591 START TEST raid_rebuild_test_sb_4k 00:20:02.591 ************************************ 00:20:02.591 20:16:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:20:02.591 20:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:20:02.591 20:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:20:02.591 20:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:20:02.591 
20:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:20:02.591 20:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:20:02.591 20:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:02.591 20:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:02.591 20:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:02.591 20:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:02.591 20:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:02.591 20:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:02.591 20:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:02.591 20:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:02.591 20:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:02.591 20:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:02.591 20:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:02.591 20:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:02.591 20:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:02.591 20:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:02.591 20:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:02.591 20:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:20:02.591 20:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # 
strip_size=0 00:20:02.591 20:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:20:02.591 20:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:20:02.591 20:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86818 00:20:02.591 20:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86818 00:20:02.591 20:16:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 86818 ']' 00:20:02.591 20:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:02.591 20:16:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:02.591 20:16:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:02.591 20:16:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:02.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:02.591 20:16:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:02.591 20:16:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:02.860 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:02.861 Zero copy mechanism will not be used. 00:20:02.861 [2024-10-17 20:16:48.320197] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 
00:20:02.861 [2024-10-17 20:16:48.320401] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86818 ] 00:20:02.861 [2024-10-17 20:16:48.496067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:03.132 [2024-10-17 20:16:48.657773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:03.390 [2024-10-17 20:16:48.869113] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:03.390 [2024-10-17 20:16:48.869377] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:03.648 20:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:03.648 20:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:20:03.648 20:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:03.648 20:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:20:03.648 20:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.648 20:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:03.906 BaseBdev1_malloc 00:20:03.906 20:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.906 20:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:03.906 20:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.906 20:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:03.906 [2024-10-17 20:16:49.344028] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:03.906 [2024-10-17 20:16:49.344393] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:03.906 [2024-10-17 20:16:49.344441] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:03.906 [2024-10-17 20:16:49.344465] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:03.906 [2024-10-17 20:16:49.347532] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:03.906 [2024-10-17 20:16:49.347717] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:03.906 BaseBdev1 00:20:03.906 20:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.906 20:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:03.906 20:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:20:03.906 20:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.906 20:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:03.906 BaseBdev2_malloc 00:20:03.906 20:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.906 20:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:03.906 20:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.906 20:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:03.906 [2024-10-17 20:16:49.402955] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:03.907 [2024-10-17 20:16:49.403073] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:20:03.907 [2024-10-17 20:16:49.403107] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:03.907 [2024-10-17 20:16:49.403128] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:03.907 [2024-10-17 20:16:49.405977] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:03.907 [2024-10-17 20:16:49.406323] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:03.907 BaseBdev2 00:20:03.907 20:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.907 20:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:20:03.907 20:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.907 20:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:03.907 spare_malloc 00:20:03.907 20:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.907 20:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:03.907 20:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.907 20:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:03.907 spare_delay 00:20:03.907 20:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.907 20:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:03.907 20:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.907 20:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:03.907 
[2024-10-17 20:16:49.486104] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:03.907 [2024-10-17 20:16:49.486182] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:03.907 [2024-10-17 20:16:49.486215] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:03.907 [2024-10-17 20:16:49.486235] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:03.907 [2024-10-17 20:16:49.489359] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:03.907 [2024-10-17 20:16:49.489652] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:03.907 spare 00:20:03.907 20:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.907 20:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:20:03.907 20:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.907 20:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:03.907 [2024-10-17 20:16:49.498364] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:03.907 [2024-10-17 20:16:49.501089] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:03.907 [2024-10-17 20:16:49.501476] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:03.907 [2024-10-17 20:16:49.501621] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:03.907 [2024-10-17 20:16:49.502086] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:03.907 [2024-10-17 20:16:49.502439] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:03.907 [2024-10-17 
20:16:49.502579] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:03.907 [2024-10-17 20:16:49.502957] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:03.907 20:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.907 20:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:03.907 20:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:03.907 20:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:03.907 20:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:03.907 20:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:03.907 20:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:03.907 20:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:03.907 20:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:03.907 20:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:03.907 20:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:03.907 20:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:03.907 20:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:03.907 20:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.907 20:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:03.907 20:16:49 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.165 20:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:04.165 "name": "raid_bdev1", 00:20:04.165 "uuid": "ed742f3a-90be-4b5e-8a90-63e4aa72385a", 00:20:04.165 "strip_size_kb": 0, 00:20:04.165 "state": "online", 00:20:04.165 "raid_level": "raid1", 00:20:04.165 "superblock": true, 00:20:04.165 "num_base_bdevs": 2, 00:20:04.165 "num_base_bdevs_discovered": 2, 00:20:04.165 "num_base_bdevs_operational": 2, 00:20:04.165 "base_bdevs_list": [ 00:20:04.165 { 00:20:04.165 "name": "BaseBdev1", 00:20:04.165 "uuid": "c2b3a9de-e2e5-59f2-8ce8-29b9fdec56c7", 00:20:04.165 "is_configured": true, 00:20:04.165 "data_offset": 256, 00:20:04.165 "data_size": 7936 00:20:04.165 }, 00:20:04.165 { 00:20:04.165 "name": "BaseBdev2", 00:20:04.165 "uuid": "1c38fe9f-8b92-573a-9adc-17117fd04b0a", 00:20:04.165 "is_configured": true, 00:20:04.165 "data_offset": 256, 00:20:04.165 "data_size": 7936 00:20:04.165 } 00:20:04.165 ] 00:20:04.165 }' 00:20:04.165 20:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:04.165 20:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:04.424 20:16:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:04.424 20:16:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.424 20:16:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:04.424 20:16:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:04.424 [2024-10-17 20:16:50.015509] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:04.424 20:16:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.424 20:16:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=7936 00:20:04.424 20:16:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:04.424 20:16:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.424 20:16:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:04.424 20:16:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:04.683 20:16:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.683 20:16:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:20:04.683 20:16:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:20:04.683 20:16:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:20:04.683 20:16:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:20:04.683 20:16:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:20:04.683 20:16:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:04.683 20:16:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:04.683 20:16:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:04.683 20:16:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:04.683 20:16:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:04.683 20:16:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:20:04.683 20:16:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:04.683 20:16:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:04.683 
20:16:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:04.942 [2024-10-17 20:16:50.387356] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:04.942 /dev/nbd0 00:20:04.942 20:16:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:04.942 20:16:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:04.942 20:16:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:20:04.942 20:16:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:20:04.942 20:16:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:04.942 20:16:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:04.942 20:16:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:20:04.942 20:16:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:20:04.942 20:16:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:04.942 20:16:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:04.942 20:16:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:04.942 1+0 records in 00:20:04.942 1+0 records out 00:20:04.942 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0004537 s, 9.0 MB/s 00:20:04.942 20:16:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:04.942 20:16:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:20:04.942 20:16:50 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:04.942 20:16:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:20:04.942 20:16:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:20:04.942 20:16:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:04.942 20:16:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:04.942 20:16:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:20:04.942 20:16:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:20:04.942 20:16:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:20:05.877 7936+0 records in 00:20:05.877 7936+0 records out 00:20:05.877 32505856 bytes (33 MB, 31 MiB) copied, 0.967959 s, 33.6 MB/s 00:20:05.877 20:16:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:20:05.877 20:16:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:05.877 20:16:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:05.877 20:16:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:05.877 20:16:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:20:05.877 20:16:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:05.877 20:16:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:06.136 20:16:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:06.136 
20:16:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:06.136 20:16:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:06.136 20:16:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:06.136 20:16:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:06.136 20:16:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:06.136 [2024-10-17 20:16:51.670709] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:06.136 20:16:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:20:06.136 20:16:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:20:06.136 20:16:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:06.136 20:16:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.136 20:16:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:06.136 [2024-10-17 20:16:51.678793] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:06.136 20:16:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.136 20:16:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:06.136 20:16:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:06.136 20:16:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:06.136 20:16:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:06.136 20:16:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:06.137 20:16:51 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:06.137 20:16:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:06.137 20:16:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:06.137 20:16:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:06.137 20:16:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:06.137 20:16:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:06.137 20:16:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:06.137 20:16:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.137 20:16:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:06.137 20:16:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.137 20:16:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:06.137 "name": "raid_bdev1", 00:20:06.137 "uuid": "ed742f3a-90be-4b5e-8a90-63e4aa72385a", 00:20:06.137 "strip_size_kb": 0, 00:20:06.137 "state": "online", 00:20:06.137 "raid_level": "raid1", 00:20:06.137 "superblock": true, 00:20:06.137 "num_base_bdevs": 2, 00:20:06.137 "num_base_bdevs_discovered": 1, 00:20:06.137 "num_base_bdevs_operational": 1, 00:20:06.137 "base_bdevs_list": [ 00:20:06.137 { 00:20:06.137 "name": null, 00:20:06.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:06.137 "is_configured": false, 00:20:06.137 "data_offset": 0, 00:20:06.137 "data_size": 7936 00:20:06.137 }, 00:20:06.137 { 00:20:06.137 "name": "BaseBdev2", 00:20:06.137 "uuid": "1c38fe9f-8b92-573a-9adc-17117fd04b0a", 00:20:06.137 "is_configured": true, 00:20:06.137 "data_offset": 256, 00:20:06.137 
"data_size": 7936 00:20:06.137 } 00:20:06.137 ] 00:20:06.137 }' 00:20:06.137 20:16:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:06.137 20:16:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:06.726 20:16:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:06.726 20:16:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.726 20:16:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:06.726 [2024-10-17 20:16:52.194969] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:06.726 [2024-10-17 20:16:52.210656] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:20:06.726 20:16:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.726 20:16:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:06.726 [2024-10-17 20:16:52.213139] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:07.664 20:16:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:07.664 20:16:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:07.664 20:16:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:07.664 20:16:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:07.664 20:16:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:07.664 20:16:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:07.664 20:16:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:20:07.664 20:16:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.664 20:16:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:07.664 20:16:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.664 20:16:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:07.664 "name": "raid_bdev1", 00:20:07.664 "uuid": "ed742f3a-90be-4b5e-8a90-63e4aa72385a", 00:20:07.664 "strip_size_kb": 0, 00:20:07.664 "state": "online", 00:20:07.664 "raid_level": "raid1", 00:20:07.664 "superblock": true, 00:20:07.664 "num_base_bdevs": 2, 00:20:07.664 "num_base_bdevs_discovered": 2, 00:20:07.664 "num_base_bdevs_operational": 2, 00:20:07.664 "process": { 00:20:07.664 "type": "rebuild", 00:20:07.664 "target": "spare", 00:20:07.664 "progress": { 00:20:07.664 "blocks": 2560, 00:20:07.664 "percent": 32 00:20:07.664 } 00:20:07.664 }, 00:20:07.664 "base_bdevs_list": [ 00:20:07.664 { 00:20:07.664 "name": "spare", 00:20:07.664 "uuid": "b52bfa4a-d05f-5723-aaaa-b668afbcec88", 00:20:07.664 "is_configured": true, 00:20:07.664 "data_offset": 256, 00:20:07.664 "data_size": 7936 00:20:07.664 }, 00:20:07.664 { 00:20:07.664 "name": "BaseBdev2", 00:20:07.664 "uuid": "1c38fe9f-8b92-573a-9adc-17117fd04b0a", 00:20:07.664 "is_configured": true, 00:20:07.664 "data_offset": 256, 00:20:07.664 "data_size": 7936 00:20:07.664 } 00:20:07.664 ] 00:20:07.664 }' 00:20:07.664 20:16:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:07.922 20:16:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:07.922 20:16:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:07.922 20:16:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:20:07.923 20:16:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:07.923 20:16:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.923 20:16:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:07.923 [2024-10-17 20:16:53.382161] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:07.923 [2024-10-17 20:16:53.421192] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:07.923 [2024-10-17 20:16:53.421266] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:07.923 [2024-10-17 20:16:53.421289] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:07.923 [2024-10-17 20:16:53.421308] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:07.923 20:16:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.923 20:16:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:07.923 20:16:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:07.923 20:16:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:07.923 20:16:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:07.923 20:16:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:07.923 20:16:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:07.923 20:16:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:07.923 20:16:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:20:07.923 20:16:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:07.923 20:16:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:07.923 20:16:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:07.923 20:16:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.923 20:16:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:07.923 20:16:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:07.923 20:16:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.923 20:16:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:07.923 "name": "raid_bdev1", 00:20:07.923 "uuid": "ed742f3a-90be-4b5e-8a90-63e4aa72385a", 00:20:07.923 "strip_size_kb": 0, 00:20:07.923 "state": "online", 00:20:07.923 "raid_level": "raid1", 00:20:07.923 "superblock": true, 00:20:07.923 "num_base_bdevs": 2, 00:20:07.923 "num_base_bdevs_discovered": 1, 00:20:07.923 "num_base_bdevs_operational": 1, 00:20:07.923 "base_bdevs_list": [ 00:20:07.923 { 00:20:07.923 "name": null, 00:20:07.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:07.923 "is_configured": false, 00:20:07.923 "data_offset": 0, 00:20:07.923 "data_size": 7936 00:20:07.923 }, 00:20:07.923 { 00:20:07.923 "name": "BaseBdev2", 00:20:07.923 "uuid": "1c38fe9f-8b92-573a-9adc-17117fd04b0a", 00:20:07.923 "is_configured": true, 00:20:07.923 "data_offset": 256, 00:20:07.923 "data_size": 7936 00:20:07.923 } 00:20:07.923 ] 00:20:07.923 }' 00:20:07.923 20:16:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:07.923 20:16:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:08.498 20:16:53 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:08.498 20:16:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:08.498 20:16:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:08.498 20:16:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:08.498 20:16:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:08.498 20:16:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:08.498 20:16:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:08.498 20:16:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.498 20:16:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:08.498 20:16:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.498 20:16:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:08.498 "name": "raid_bdev1", 00:20:08.498 "uuid": "ed742f3a-90be-4b5e-8a90-63e4aa72385a", 00:20:08.498 "strip_size_kb": 0, 00:20:08.498 "state": "online", 00:20:08.498 "raid_level": "raid1", 00:20:08.498 "superblock": true, 00:20:08.498 "num_base_bdevs": 2, 00:20:08.498 "num_base_bdevs_discovered": 1, 00:20:08.498 "num_base_bdevs_operational": 1, 00:20:08.498 "base_bdevs_list": [ 00:20:08.498 { 00:20:08.498 "name": null, 00:20:08.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.498 "is_configured": false, 00:20:08.498 "data_offset": 0, 00:20:08.498 "data_size": 7936 00:20:08.498 }, 00:20:08.498 { 00:20:08.498 "name": "BaseBdev2", 00:20:08.498 "uuid": "1c38fe9f-8b92-573a-9adc-17117fd04b0a", 00:20:08.498 "is_configured": true, 00:20:08.498 "data_offset": 
256, 00:20:08.498 "data_size": 7936 00:20:08.498 } 00:20:08.498 ] 00:20:08.498 }' 00:20:08.498 20:16:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:08.498 20:16:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:08.498 20:16:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:08.498 20:16:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:08.498 20:16:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:08.498 20:16:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.498 20:16:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:08.498 [2024-10-17 20:16:54.118623] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:08.498 [2024-10-17 20:16:54.134061] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:20:08.498 20:16:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.498 20:16:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:08.498 [2024-10-17 20:16:54.136650] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:09.873 20:16:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:09.873 20:16:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:09.873 20:16:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:09.873 20:16:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:09.873 20:16:55 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:09.873 20:16:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:09.873 20:16:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:09.873 20:16:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.873 20:16:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:09.873 20:16:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.873 20:16:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:09.873 "name": "raid_bdev1", 00:20:09.873 "uuid": "ed742f3a-90be-4b5e-8a90-63e4aa72385a", 00:20:09.873 "strip_size_kb": 0, 00:20:09.873 "state": "online", 00:20:09.873 "raid_level": "raid1", 00:20:09.873 "superblock": true, 00:20:09.873 "num_base_bdevs": 2, 00:20:09.873 "num_base_bdevs_discovered": 2, 00:20:09.873 "num_base_bdevs_operational": 2, 00:20:09.873 "process": { 00:20:09.873 "type": "rebuild", 00:20:09.873 "target": "spare", 00:20:09.873 "progress": { 00:20:09.873 "blocks": 2560, 00:20:09.873 "percent": 32 00:20:09.873 } 00:20:09.874 }, 00:20:09.874 "base_bdevs_list": [ 00:20:09.874 { 00:20:09.874 "name": "spare", 00:20:09.874 "uuid": "b52bfa4a-d05f-5723-aaaa-b668afbcec88", 00:20:09.874 "is_configured": true, 00:20:09.874 "data_offset": 256, 00:20:09.874 "data_size": 7936 00:20:09.874 }, 00:20:09.874 { 00:20:09.874 "name": "BaseBdev2", 00:20:09.874 "uuid": "1c38fe9f-8b92-573a-9adc-17117fd04b0a", 00:20:09.874 "is_configured": true, 00:20:09.874 "data_offset": 256, 00:20:09.874 "data_size": 7936 00:20:09.874 } 00:20:09.874 ] 00:20:09.874 }' 00:20:09.874 20:16:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:09.874 20:16:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:20:09.874 20:16:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:09.874 20:16:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:09.874 20:16:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:20:09.874 20:16:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:20:09.874 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:20:09.874 20:16:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:20:09.874 20:16:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:20:09.874 20:16:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:20:09.874 20:16:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=730 00:20:09.874 20:16:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:09.874 20:16:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:09.874 20:16:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:09.874 20:16:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:09.874 20:16:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:09.874 20:16:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:09.874 20:16:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:09.874 20:16:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.874 20:16:55 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:20:09.874 20:16:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:09.874 20:16:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.874 20:16:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:09.874 "name": "raid_bdev1", 00:20:09.874 "uuid": "ed742f3a-90be-4b5e-8a90-63e4aa72385a", 00:20:09.874 "strip_size_kb": 0, 00:20:09.874 "state": "online", 00:20:09.874 "raid_level": "raid1", 00:20:09.874 "superblock": true, 00:20:09.874 "num_base_bdevs": 2, 00:20:09.874 "num_base_bdevs_discovered": 2, 00:20:09.874 "num_base_bdevs_operational": 2, 00:20:09.874 "process": { 00:20:09.874 "type": "rebuild", 00:20:09.874 "target": "spare", 00:20:09.874 "progress": { 00:20:09.874 "blocks": 2816, 00:20:09.874 "percent": 35 00:20:09.874 } 00:20:09.874 }, 00:20:09.874 "base_bdevs_list": [ 00:20:09.874 { 00:20:09.874 "name": "spare", 00:20:09.874 "uuid": "b52bfa4a-d05f-5723-aaaa-b668afbcec88", 00:20:09.874 "is_configured": true, 00:20:09.874 "data_offset": 256, 00:20:09.874 "data_size": 7936 00:20:09.874 }, 00:20:09.874 { 00:20:09.874 "name": "BaseBdev2", 00:20:09.874 "uuid": "1c38fe9f-8b92-573a-9adc-17117fd04b0a", 00:20:09.874 "is_configured": true, 00:20:09.874 "data_offset": 256, 00:20:09.874 "data_size": 7936 00:20:09.874 } 00:20:09.874 ] 00:20:09.874 }' 00:20:09.874 20:16:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:09.874 20:16:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:09.874 20:16:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:09.874 20:16:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:09.874 20:16:55 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:20:11.250 20:16:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:11.250 20:16:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:11.250 20:16:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:11.250 20:16:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:11.250 20:16:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:11.250 20:16:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:11.250 20:16:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.250 20:16:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:11.250 20:16:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.250 20:16:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:11.250 20:16:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.250 20:16:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:11.250 "name": "raid_bdev1", 00:20:11.250 "uuid": "ed742f3a-90be-4b5e-8a90-63e4aa72385a", 00:20:11.250 "strip_size_kb": 0, 00:20:11.250 "state": "online", 00:20:11.250 "raid_level": "raid1", 00:20:11.250 "superblock": true, 00:20:11.250 "num_base_bdevs": 2, 00:20:11.250 "num_base_bdevs_discovered": 2, 00:20:11.250 "num_base_bdevs_operational": 2, 00:20:11.250 "process": { 00:20:11.250 "type": "rebuild", 00:20:11.250 "target": "spare", 00:20:11.250 "progress": { 00:20:11.250 "blocks": 5888, 00:20:11.250 "percent": 74 00:20:11.250 } 00:20:11.250 }, 00:20:11.250 "base_bdevs_list": [ 00:20:11.250 { 
00:20:11.250 "name": "spare", 00:20:11.250 "uuid": "b52bfa4a-d05f-5723-aaaa-b668afbcec88", 00:20:11.250 "is_configured": true, 00:20:11.250 "data_offset": 256, 00:20:11.250 "data_size": 7936 00:20:11.250 }, 00:20:11.250 { 00:20:11.250 "name": "BaseBdev2", 00:20:11.250 "uuid": "1c38fe9f-8b92-573a-9adc-17117fd04b0a", 00:20:11.250 "is_configured": true, 00:20:11.250 "data_offset": 256, 00:20:11.250 "data_size": 7936 00:20:11.250 } 00:20:11.250 ] 00:20:11.250 }' 00:20:11.250 20:16:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:11.250 20:16:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:11.250 20:16:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:11.250 20:16:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:11.250 20:16:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:11.816 [2024-10-17 20:16:57.259214] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:11.816 [2024-10-17 20:16:57.259580] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:11.816 [2024-10-17 20:16:57.259742] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:12.100 20:16:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:12.100 20:16:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:12.100 20:16:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:12.100 20:16:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:12.100 20:16:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:20:12.100 20:16:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:12.100 20:16:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:12.100 20:16:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.100 20:16:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:12.100 20:16:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:12.100 20:16:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.100 20:16:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:12.100 "name": "raid_bdev1", 00:20:12.100 "uuid": "ed742f3a-90be-4b5e-8a90-63e4aa72385a", 00:20:12.100 "strip_size_kb": 0, 00:20:12.100 "state": "online", 00:20:12.100 "raid_level": "raid1", 00:20:12.100 "superblock": true, 00:20:12.100 "num_base_bdevs": 2, 00:20:12.100 "num_base_bdevs_discovered": 2, 00:20:12.100 "num_base_bdevs_operational": 2, 00:20:12.100 "base_bdevs_list": [ 00:20:12.100 { 00:20:12.100 "name": "spare", 00:20:12.100 "uuid": "b52bfa4a-d05f-5723-aaaa-b668afbcec88", 00:20:12.100 "is_configured": true, 00:20:12.100 "data_offset": 256, 00:20:12.100 "data_size": 7936 00:20:12.100 }, 00:20:12.100 { 00:20:12.100 "name": "BaseBdev2", 00:20:12.100 "uuid": "1c38fe9f-8b92-573a-9adc-17117fd04b0a", 00:20:12.100 "is_configured": true, 00:20:12.100 "data_offset": 256, 00:20:12.100 "data_size": 7936 00:20:12.100 } 00:20:12.100 ] 00:20:12.100 }' 00:20:12.100 20:16:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:12.365 20:16:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:12.365 20:16:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:20:12.365 20:16:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:12.365 20:16:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:20:12.365 20:16:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:12.365 20:16:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:12.365 20:16:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:12.365 20:16:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:12.365 20:16:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:12.365 20:16:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:12.365 20:16:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:12.365 20:16:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.365 20:16:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:12.365 20:16:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.365 20:16:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:12.365 "name": "raid_bdev1", 00:20:12.365 "uuid": "ed742f3a-90be-4b5e-8a90-63e4aa72385a", 00:20:12.365 "strip_size_kb": 0, 00:20:12.365 "state": "online", 00:20:12.365 "raid_level": "raid1", 00:20:12.365 "superblock": true, 00:20:12.365 "num_base_bdevs": 2, 00:20:12.365 "num_base_bdevs_discovered": 2, 00:20:12.365 "num_base_bdevs_operational": 2, 00:20:12.365 "base_bdevs_list": [ 00:20:12.365 { 00:20:12.365 "name": "spare", 00:20:12.365 "uuid": "b52bfa4a-d05f-5723-aaaa-b668afbcec88", 00:20:12.365 "is_configured": true, 00:20:12.365 
"data_offset": 256, 00:20:12.365 "data_size": 7936 00:20:12.365 }, 00:20:12.365 { 00:20:12.365 "name": "BaseBdev2", 00:20:12.365 "uuid": "1c38fe9f-8b92-573a-9adc-17117fd04b0a", 00:20:12.365 "is_configured": true, 00:20:12.365 "data_offset": 256, 00:20:12.365 "data_size": 7936 00:20:12.365 } 00:20:12.365 ] 00:20:12.365 }' 00:20:12.365 20:16:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:12.365 20:16:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:12.365 20:16:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:12.365 20:16:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:12.365 20:16:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:12.365 20:16:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:12.365 20:16:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:12.365 20:16:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:12.365 20:16:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:12.365 20:16:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:12.365 20:16:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:12.365 20:16:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:12.365 20:16:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:12.365 20:16:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:12.365 20:16:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:20:12.365 20:16:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:12.365 20:16:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.365 20:16:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:12.365 20:16:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.625 20:16:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:12.625 "name": "raid_bdev1", 00:20:12.625 "uuid": "ed742f3a-90be-4b5e-8a90-63e4aa72385a", 00:20:12.625 "strip_size_kb": 0, 00:20:12.625 "state": "online", 00:20:12.625 "raid_level": "raid1", 00:20:12.625 "superblock": true, 00:20:12.625 "num_base_bdevs": 2, 00:20:12.625 "num_base_bdevs_discovered": 2, 00:20:12.625 "num_base_bdevs_operational": 2, 00:20:12.625 "base_bdevs_list": [ 00:20:12.625 { 00:20:12.625 "name": "spare", 00:20:12.625 "uuid": "b52bfa4a-d05f-5723-aaaa-b668afbcec88", 00:20:12.625 "is_configured": true, 00:20:12.625 "data_offset": 256, 00:20:12.625 "data_size": 7936 00:20:12.625 }, 00:20:12.625 { 00:20:12.625 "name": "BaseBdev2", 00:20:12.625 "uuid": "1c38fe9f-8b92-573a-9adc-17117fd04b0a", 00:20:12.625 "is_configured": true, 00:20:12.625 "data_offset": 256, 00:20:12.625 "data_size": 7936 00:20:12.625 } 00:20:12.625 ] 00:20:12.625 }' 00:20:12.625 20:16:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:12.625 20:16:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:12.884 20:16:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:12.884 20:16:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.884 20:16:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:12.884 
[2024-10-17 20:16:58.495191] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:12.884 [2024-10-17 20:16:58.495252] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:12.884 [2024-10-17 20:16:58.495353] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:12.884 [2024-10-17 20:16:58.495458] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:12.884 [2024-10-17 20:16:58.495476] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:12.884 20:16:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.885 20:16:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:12.885 20:16:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:20:12.885 20:16:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.885 20:16:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:12.885 20:16:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.144 20:16:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:13.144 20:16:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:20:13.144 20:16:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:20:13.144 20:16:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:13.144 20:16:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:13.144 20:16:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:20:13.144 20:16:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:13.144 20:16:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:13.144 20:16:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:13.144 20:16:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:20:13.144 20:16:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:13.144 20:16:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:13.144 20:16:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:13.403 /dev/nbd0 00:20:13.403 20:16:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:13.403 20:16:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:13.403 20:16:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:20:13.403 20:16:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:20:13.403 20:16:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:13.403 20:16:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:13.403 20:16:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:20:13.403 20:16:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:20:13.403 20:16:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:13.403 20:16:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:13.403 20:16:58 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:13.403 1+0 records in 00:20:13.403 1+0 records out 00:20:13.403 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000593217 s, 6.9 MB/s 00:20:13.403 20:16:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:13.403 20:16:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:20:13.403 20:16:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:13.404 20:16:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:20:13.404 20:16:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:20:13.404 20:16:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:13.404 20:16:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:13.404 20:16:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:20:13.662 /dev/nbd1 00:20:13.662 20:16:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:13.662 20:16:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:13.662 20:16:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:20:13.662 20:16:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:20:13.662 20:16:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:13.662 20:16:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:13.662 20:16:59 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:20:13.662 20:16:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:20:13.662 20:16:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:13.662 20:16:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:13.662 20:16:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:13.662 1+0 records in 00:20:13.662 1+0 records out 00:20:13.662 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000393311 s, 10.4 MB/s 00:20:13.662 20:16:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:13.662 20:16:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:20:13.662 20:16:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:13.662 20:16:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:20:13.662 20:16:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:20:13.662 20:16:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:13.662 20:16:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:13.662 20:16:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:20:13.920 20:16:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:20:13.920 20:16:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:13.920 20:16:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:13.920 20:16:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:13.920 20:16:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:20:13.920 20:16:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:13.920 20:16:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:14.177 20:16:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:14.177 20:16:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:14.177 20:16:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:14.177 20:16:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:14.177 20:16:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:14.177 20:16:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:14.177 20:16:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:20:14.177 20:16:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:20:14.177 20:16:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:14.177 20:16:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:20:14.435 20:16:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:14.435 20:16:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:14.435 20:16:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:14.435 20:16:59 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:14.435 20:16:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:14.435 20:16:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:14.435 20:16:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:20:14.435 20:16:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:20:14.435 20:16:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:20:14.435 20:16:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:20:14.435 20:16:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.435 20:16:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:14.435 20:16:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.435 20:16:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:14.435 20:16:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.435 20:16:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:14.435 [2024-10-17 20:16:59.953543] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:14.435 [2024-10-17 20:16:59.953772] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:14.435 [2024-10-17 20:16:59.953821] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:20:14.435 [2024-10-17 20:16:59.953840] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:14.435 [2024-10-17 20:16:59.956788] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:14.435 
[2024-10-17 20:16:59.956971] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:14.435 [2024-10-17 20:16:59.957126] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:14.435 [2024-10-17 20:16:59.957205] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:14.435 [2024-10-17 20:16:59.957395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:14.435 spare 00:20:14.435 20:16:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.435 20:16:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:20:14.435 20:16:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.435 20:16:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:14.435 [2024-10-17 20:17:00.057533] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:20:14.435 [2024-10-17 20:17:00.057611] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:14.435 [2024-10-17 20:17:00.058086] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:20:14.435 [2024-10-17 20:17:00.058359] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:20:14.435 [2024-10-17 20:17:00.058381] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:20:14.435 [2024-10-17 20:17:00.058670] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:14.435 20:17:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.435 20:17:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:14.435 20:17:00 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:14.435 20:17:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:14.435 20:17:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:14.435 20:17:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:14.435 20:17:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:14.435 20:17:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:14.436 20:17:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:14.436 20:17:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:14.436 20:17:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:14.436 20:17:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:14.436 20:17:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:14.436 20:17:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.436 20:17:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:14.436 20:17:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.693 20:17:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:14.693 "name": "raid_bdev1", 00:20:14.693 "uuid": "ed742f3a-90be-4b5e-8a90-63e4aa72385a", 00:20:14.693 "strip_size_kb": 0, 00:20:14.693 "state": "online", 00:20:14.693 "raid_level": "raid1", 00:20:14.693 "superblock": true, 00:20:14.693 "num_base_bdevs": 2, 00:20:14.693 "num_base_bdevs_discovered": 2, 00:20:14.693 "num_base_bdevs_operational": 2, 
00:20:14.693 "base_bdevs_list": [ 00:20:14.693 { 00:20:14.693 "name": "spare", 00:20:14.693 "uuid": "b52bfa4a-d05f-5723-aaaa-b668afbcec88", 00:20:14.693 "is_configured": true, 00:20:14.693 "data_offset": 256, 00:20:14.693 "data_size": 7936 00:20:14.693 }, 00:20:14.693 { 00:20:14.693 "name": "BaseBdev2", 00:20:14.693 "uuid": "1c38fe9f-8b92-573a-9adc-17117fd04b0a", 00:20:14.693 "is_configured": true, 00:20:14.693 "data_offset": 256, 00:20:14.693 "data_size": 7936 00:20:14.693 } 00:20:14.694 ] 00:20:14.694 }' 00:20:14.694 20:17:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:14.694 20:17:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:14.952 20:17:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:14.952 20:17:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:14.952 20:17:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:14.952 20:17:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:14.952 20:17:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:14.952 20:17:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:14.952 20:17:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.952 20:17:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:14.952 20:17:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:14.952 20:17:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.211 20:17:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:15.211 "name": "raid_bdev1", 00:20:15.211 
"uuid": "ed742f3a-90be-4b5e-8a90-63e4aa72385a", 00:20:15.211 "strip_size_kb": 0, 00:20:15.211 "state": "online", 00:20:15.211 "raid_level": "raid1", 00:20:15.211 "superblock": true, 00:20:15.211 "num_base_bdevs": 2, 00:20:15.211 "num_base_bdevs_discovered": 2, 00:20:15.211 "num_base_bdevs_operational": 2, 00:20:15.211 "base_bdevs_list": [ 00:20:15.211 { 00:20:15.211 "name": "spare", 00:20:15.211 "uuid": "b52bfa4a-d05f-5723-aaaa-b668afbcec88", 00:20:15.211 "is_configured": true, 00:20:15.211 "data_offset": 256, 00:20:15.211 "data_size": 7936 00:20:15.211 }, 00:20:15.211 { 00:20:15.211 "name": "BaseBdev2", 00:20:15.211 "uuid": "1c38fe9f-8b92-573a-9adc-17117fd04b0a", 00:20:15.211 "is_configured": true, 00:20:15.211 "data_offset": 256, 00:20:15.211 "data_size": 7936 00:20:15.211 } 00:20:15.211 ] 00:20:15.211 }' 00:20:15.211 20:17:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:15.211 20:17:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:15.211 20:17:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:15.211 20:17:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:15.211 20:17:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:15.211 20:17:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.211 20:17:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:15.211 20:17:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:15.211 20:17:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.211 20:17:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:20:15.211 20:17:00 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:15.211 20:17:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.211 20:17:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:15.211 [2024-10-17 20:17:00.782795] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:15.211 20:17:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.211 20:17:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:15.211 20:17:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:15.211 20:17:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:15.211 20:17:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:15.211 20:17:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:15.211 20:17:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:15.211 20:17:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:15.211 20:17:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:15.211 20:17:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:15.211 20:17:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:15.211 20:17:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:15.211 20:17:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:15.211 20:17:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.211 
20:17:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:15.211 20:17:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.211 20:17:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:15.211 "name": "raid_bdev1", 00:20:15.211 "uuid": "ed742f3a-90be-4b5e-8a90-63e4aa72385a", 00:20:15.211 "strip_size_kb": 0, 00:20:15.211 "state": "online", 00:20:15.211 "raid_level": "raid1", 00:20:15.211 "superblock": true, 00:20:15.211 "num_base_bdevs": 2, 00:20:15.211 "num_base_bdevs_discovered": 1, 00:20:15.211 "num_base_bdevs_operational": 1, 00:20:15.211 "base_bdevs_list": [ 00:20:15.211 { 00:20:15.211 "name": null, 00:20:15.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:15.211 "is_configured": false, 00:20:15.211 "data_offset": 0, 00:20:15.211 "data_size": 7936 00:20:15.211 }, 00:20:15.211 { 00:20:15.212 "name": "BaseBdev2", 00:20:15.212 "uuid": "1c38fe9f-8b92-573a-9adc-17117fd04b0a", 00:20:15.212 "is_configured": true, 00:20:15.212 "data_offset": 256, 00:20:15.212 "data_size": 7936 00:20:15.212 } 00:20:15.212 ] 00:20:15.212 }' 00:20:15.212 20:17:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:15.212 20:17:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:15.779 20:17:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:15.779 20:17:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.779 20:17:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:15.779 [2024-10-17 20:17:01.306928] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:15.779 [2024-10-17 20:17:01.307201] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev 
raid_bdev1 (5) 00:20:15.779 [2024-10-17 20:17:01.307233] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:20:15.779 [2024-10-17 20:17:01.307287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:15.779 [2024-10-17 20:17:01.322946] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:20:15.779 20:17:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.779 20:17:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:20:15.779 [2024-10-17 20:17:01.325669] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:16.715 20:17:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:16.715 20:17:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:16.715 20:17:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:16.715 20:17:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:16.715 20:17:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:16.715 20:17:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:16.715 20:17:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.715 20:17:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:16.715 20:17:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:16.715 20:17:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.974 20:17:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:16.974 
"name": "raid_bdev1", 00:20:16.974 "uuid": "ed742f3a-90be-4b5e-8a90-63e4aa72385a", 00:20:16.974 "strip_size_kb": 0, 00:20:16.974 "state": "online", 00:20:16.974 "raid_level": "raid1", 00:20:16.974 "superblock": true, 00:20:16.974 "num_base_bdevs": 2, 00:20:16.974 "num_base_bdevs_discovered": 2, 00:20:16.974 "num_base_bdevs_operational": 2, 00:20:16.974 "process": { 00:20:16.974 "type": "rebuild", 00:20:16.974 "target": "spare", 00:20:16.974 "progress": { 00:20:16.974 "blocks": 2560, 00:20:16.974 "percent": 32 00:20:16.974 } 00:20:16.974 }, 00:20:16.974 "base_bdevs_list": [ 00:20:16.974 { 00:20:16.974 "name": "spare", 00:20:16.974 "uuid": "b52bfa4a-d05f-5723-aaaa-b668afbcec88", 00:20:16.974 "is_configured": true, 00:20:16.974 "data_offset": 256, 00:20:16.974 "data_size": 7936 00:20:16.974 }, 00:20:16.974 { 00:20:16.974 "name": "BaseBdev2", 00:20:16.974 "uuid": "1c38fe9f-8b92-573a-9adc-17117fd04b0a", 00:20:16.974 "is_configured": true, 00:20:16.974 "data_offset": 256, 00:20:16.974 "data_size": 7936 00:20:16.974 } 00:20:16.974 ] 00:20:16.974 }' 00:20:16.974 20:17:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:16.974 20:17:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:16.974 20:17:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:16.974 20:17:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:16.974 20:17:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:20:16.974 20:17:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.974 20:17:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:16.974 [2024-10-17 20:17:02.494940] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:16.974 [2024-10-17 
20:17:02.534839] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:16.974 [2024-10-17 20:17:02.534948] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:16.974 [2024-10-17 20:17:02.534974] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:16.974 [2024-10-17 20:17:02.534988] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:16.974 20:17:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.974 20:17:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:16.974 20:17:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:16.974 20:17:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:16.974 20:17:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:16.974 20:17:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:16.974 20:17:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:16.974 20:17:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:16.974 20:17:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:16.974 20:17:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:16.974 20:17:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:16.974 20:17:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:16.974 20:17:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:20:16.974 20:17:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.974 20:17:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:16.974 20:17:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.974 20:17:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:16.974 "name": "raid_bdev1", 00:20:16.974 "uuid": "ed742f3a-90be-4b5e-8a90-63e4aa72385a", 00:20:16.974 "strip_size_kb": 0, 00:20:16.974 "state": "online", 00:20:16.974 "raid_level": "raid1", 00:20:16.974 "superblock": true, 00:20:16.974 "num_base_bdevs": 2, 00:20:16.974 "num_base_bdevs_discovered": 1, 00:20:16.974 "num_base_bdevs_operational": 1, 00:20:16.974 "base_bdevs_list": [ 00:20:16.974 { 00:20:16.974 "name": null, 00:20:16.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:16.974 "is_configured": false, 00:20:16.974 "data_offset": 0, 00:20:16.974 "data_size": 7936 00:20:16.974 }, 00:20:16.974 { 00:20:16.974 "name": "BaseBdev2", 00:20:16.974 "uuid": "1c38fe9f-8b92-573a-9adc-17117fd04b0a", 00:20:16.974 "is_configured": true, 00:20:16.974 "data_offset": 256, 00:20:16.974 "data_size": 7936 00:20:16.974 } 00:20:16.974 ] 00:20:16.974 }' 00:20:16.974 20:17:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:16.974 20:17:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:17.541 20:17:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:17.541 20:17:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.541 20:17:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:17.541 [2024-10-17 20:17:03.074574] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:17.541 [2024-10-17 20:17:03.074902] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:17.541 [2024-10-17 20:17:03.074947] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:20:17.541 [2024-10-17 20:17:03.074970] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:17.541 [2024-10-17 20:17:03.075633] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:17.541 [2024-10-17 20:17:03.075673] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:17.541 [2024-10-17 20:17:03.075799] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:17.541 [2024-10-17 20:17:03.075823] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:17.541 [2024-10-17 20:17:03.075838] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:20:17.541 [2024-10-17 20:17:03.075876] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:17.541 [2024-10-17 20:17:03.091270] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:20:17.541 spare 00:20:17.541 20:17:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.541 20:17:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:20:17.541 [2024-10-17 20:17:03.093831] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:18.479 20:17:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:18.479 20:17:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:18.479 20:17:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:18.479 20:17:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:18.479 20:17:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:18.479 20:17:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:18.479 20:17:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.479 20:17:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:18.479 20:17:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:18.479 20:17:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.737 20:17:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:18.737 "name": "raid_bdev1", 00:20:18.737 "uuid": "ed742f3a-90be-4b5e-8a90-63e4aa72385a", 00:20:18.737 "strip_size_kb": 0, 00:20:18.737 
"state": "online", 00:20:18.737 "raid_level": "raid1", 00:20:18.737 "superblock": true, 00:20:18.737 "num_base_bdevs": 2, 00:20:18.737 "num_base_bdevs_discovered": 2, 00:20:18.737 "num_base_bdevs_operational": 2, 00:20:18.737 "process": { 00:20:18.737 "type": "rebuild", 00:20:18.737 "target": "spare", 00:20:18.737 "progress": { 00:20:18.737 "blocks": 2560, 00:20:18.737 "percent": 32 00:20:18.737 } 00:20:18.737 }, 00:20:18.737 "base_bdevs_list": [ 00:20:18.737 { 00:20:18.737 "name": "spare", 00:20:18.737 "uuid": "b52bfa4a-d05f-5723-aaaa-b668afbcec88", 00:20:18.737 "is_configured": true, 00:20:18.737 "data_offset": 256, 00:20:18.737 "data_size": 7936 00:20:18.737 }, 00:20:18.737 { 00:20:18.737 "name": "BaseBdev2", 00:20:18.737 "uuid": "1c38fe9f-8b92-573a-9adc-17117fd04b0a", 00:20:18.737 "is_configured": true, 00:20:18.737 "data_offset": 256, 00:20:18.737 "data_size": 7936 00:20:18.737 } 00:20:18.737 ] 00:20:18.737 }' 00:20:18.737 20:17:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:18.737 20:17:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:18.737 20:17:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:18.737 20:17:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:18.737 20:17:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:20:18.737 20:17:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.737 20:17:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:18.737 [2024-10-17 20:17:04.275243] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:18.737 [2024-10-17 20:17:04.303152] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:20:18.737 [2024-10-17 20:17:04.303239] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:18.737 [2024-10-17 20:17:04.303268] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:18.737 [2024-10-17 20:17:04.303280] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:18.737 20:17:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.737 20:17:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:18.737 20:17:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:18.738 20:17:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:18.738 20:17:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:18.738 20:17:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:18.738 20:17:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:18.738 20:17:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:18.738 20:17:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:18.738 20:17:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:18.738 20:17:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:18.738 20:17:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:18.738 20:17:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.738 20:17:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:18.738 20:17:04 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:18.738 20:17:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.996 20:17:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:18.996 "name": "raid_bdev1", 00:20:18.996 "uuid": "ed742f3a-90be-4b5e-8a90-63e4aa72385a", 00:20:18.996 "strip_size_kb": 0, 00:20:18.996 "state": "online", 00:20:18.996 "raid_level": "raid1", 00:20:18.996 "superblock": true, 00:20:18.996 "num_base_bdevs": 2, 00:20:18.996 "num_base_bdevs_discovered": 1, 00:20:18.996 "num_base_bdevs_operational": 1, 00:20:18.996 "base_bdevs_list": [ 00:20:18.996 { 00:20:18.996 "name": null, 00:20:18.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:18.996 "is_configured": false, 00:20:18.996 "data_offset": 0, 00:20:18.996 "data_size": 7936 00:20:18.996 }, 00:20:18.996 { 00:20:18.996 "name": "BaseBdev2", 00:20:18.996 "uuid": "1c38fe9f-8b92-573a-9adc-17117fd04b0a", 00:20:18.996 "is_configured": true, 00:20:18.996 "data_offset": 256, 00:20:18.996 "data_size": 7936 00:20:18.996 } 00:20:18.996 ] 00:20:18.996 }' 00:20:18.996 20:17:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:18.996 20:17:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:19.254 20:17:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:19.254 20:17:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:19.254 20:17:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:19.254 20:17:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:19.254 20:17:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:19.254 20:17:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 
-- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.254 20:17:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:19.254 20:17:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.255 20:17:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:19.255 20:17:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.513 20:17:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:19.513 "name": "raid_bdev1", 00:20:19.513 "uuid": "ed742f3a-90be-4b5e-8a90-63e4aa72385a", 00:20:19.513 "strip_size_kb": 0, 00:20:19.513 "state": "online", 00:20:19.513 "raid_level": "raid1", 00:20:19.513 "superblock": true, 00:20:19.513 "num_base_bdevs": 2, 00:20:19.513 "num_base_bdevs_discovered": 1, 00:20:19.513 "num_base_bdevs_operational": 1, 00:20:19.513 "base_bdevs_list": [ 00:20:19.513 { 00:20:19.513 "name": null, 00:20:19.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:19.513 "is_configured": false, 00:20:19.513 "data_offset": 0, 00:20:19.513 "data_size": 7936 00:20:19.513 }, 00:20:19.513 { 00:20:19.513 "name": "BaseBdev2", 00:20:19.513 "uuid": "1c38fe9f-8b92-573a-9adc-17117fd04b0a", 00:20:19.513 "is_configured": true, 00:20:19.513 "data_offset": 256, 00:20:19.513 "data_size": 7936 00:20:19.513 } 00:20:19.513 ] 00:20:19.513 }' 00:20:19.513 20:17:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:19.513 20:17:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:19.513 20:17:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:19.513 20:17:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:19.513 20:17:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd 
bdev_passthru_delete BaseBdev1 00:20:19.513 20:17:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.513 20:17:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:19.513 20:17:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.513 20:17:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:19.513 20:17:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.513 20:17:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:19.513 [2024-10-17 20:17:05.031352] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:19.513 [2024-10-17 20:17:05.031438] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:19.513 [2024-10-17 20:17:05.031476] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:20:19.513 [2024-10-17 20:17:05.031505] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:19.513 [2024-10-17 20:17:05.032130] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:19.513 [2024-10-17 20:17:05.032163] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:19.513 [2024-10-17 20:17:05.032293] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:19.513 [2024-10-17 20:17:05.032315] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:19.513 [2024-10-17 20:17:05.032330] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:19.513 [2024-10-17 20:17:05.032345] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed 
to examine bdev BaseBdev1: Invalid argument 00:20:19.513 BaseBdev1 00:20:19.513 20:17:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.513 20:17:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:20:20.448 20:17:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:20.448 20:17:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:20.448 20:17:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:20.448 20:17:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:20.448 20:17:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:20.448 20:17:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:20.448 20:17:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:20.448 20:17:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:20.448 20:17:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:20.448 20:17:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:20.448 20:17:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:20.448 20:17:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:20.448 20:17:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.448 20:17:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:20.448 20:17:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.448 20:17:06 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:20.448 "name": "raid_bdev1", 00:20:20.448 "uuid": "ed742f3a-90be-4b5e-8a90-63e4aa72385a", 00:20:20.448 "strip_size_kb": 0, 00:20:20.448 "state": "online", 00:20:20.448 "raid_level": "raid1", 00:20:20.448 "superblock": true, 00:20:20.448 "num_base_bdevs": 2, 00:20:20.448 "num_base_bdevs_discovered": 1, 00:20:20.448 "num_base_bdevs_operational": 1, 00:20:20.448 "base_bdevs_list": [ 00:20:20.448 { 00:20:20.448 "name": null, 00:20:20.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:20.448 "is_configured": false, 00:20:20.448 "data_offset": 0, 00:20:20.448 "data_size": 7936 00:20:20.448 }, 00:20:20.448 { 00:20:20.448 "name": "BaseBdev2", 00:20:20.448 "uuid": "1c38fe9f-8b92-573a-9adc-17117fd04b0a", 00:20:20.448 "is_configured": true, 00:20:20.448 "data_offset": 256, 00:20:20.448 "data_size": 7936 00:20:20.448 } 00:20:20.448 ] 00:20:20.448 }' 00:20:20.448 20:17:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:20.448 20:17:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:21.015 20:17:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:21.015 20:17:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:21.015 20:17:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:21.015 20:17:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:21.015 20:17:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:21.015 20:17:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.015 20:17:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.015 20:17:06 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@10 -- # set +x 00:20:21.015 20:17:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:21.015 20:17:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.015 20:17:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:21.015 "name": "raid_bdev1", 00:20:21.015 "uuid": "ed742f3a-90be-4b5e-8a90-63e4aa72385a", 00:20:21.015 "strip_size_kb": 0, 00:20:21.015 "state": "online", 00:20:21.015 "raid_level": "raid1", 00:20:21.015 "superblock": true, 00:20:21.015 "num_base_bdevs": 2, 00:20:21.015 "num_base_bdevs_discovered": 1, 00:20:21.015 "num_base_bdevs_operational": 1, 00:20:21.015 "base_bdevs_list": [ 00:20:21.015 { 00:20:21.015 "name": null, 00:20:21.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:21.015 "is_configured": false, 00:20:21.015 "data_offset": 0, 00:20:21.015 "data_size": 7936 00:20:21.015 }, 00:20:21.015 { 00:20:21.015 "name": "BaseBdev2", 00:20:21.015 "uuid": "1c38fe9f-8b92-573a-9adc-17117fd04b0a", 00:20:21.015 "is_configured": true, 00:20:21.015 "data_offset": 256, 00:20:21.015 "data_size": 7936 00:20:21.015 } 00:20:21.015 ] 00:20:21.015 }' 00:20:21.016 20:17:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:21.274 20:17:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:21.274 20:17:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:21.274 20:17:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:21.274 20:17:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:21.274 20:17:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@650 -- # local es=0 00:20:21.274 20:17:06 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:21.274 20:17:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:21.274 20:17:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:21.274 20:17:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:21.274 20:17:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:21.274 20:17:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:21.274 20:17:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.274 20:17:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:21.274 [2024-10-17 20:17:06.747879] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:21.274 [2024-10-17 20:17:06.748120] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:21.274 [2024-10-17 20:17:06.748145] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:21.274 request: 00:20:21.274 { 00:20:21.274 "base_bdev": "BaseBdev1", 00:20:21.274 "raid_bdev": "raid_bdev1", 00:20:21.274 "method": "bdev_raid_add_base_bdev", 00:20:21.274 "req_id": 1 00:20:21.274 } 00:20:21.274 Got JSON-RPC error response 00:20:21.274 response: 00:20:21.274 { 00:20:21.274 "code": -22, 00:20:21.274 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:20:21.274 } 00:20:21.274 20:17:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:21.274 20:17:06 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@653 -- # es=1 00:20:21.274 20:17:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:21.274 20:17:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:21.274 20:17:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:21.274 20:17:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:20:22.210 20:17:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:22.210 20:17:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:22.210 20:17:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:22.210 20:17:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:22.210 20:17:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:22.210 20:17:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:22.210 20:17:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:22.210 20:17:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:22.210 20:17:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:22.210 20:17:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:22.210 20:17:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:22.210 20:17:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.210 20:17:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:22.210 20:17:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:22.210 20:17:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.210 20:17:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:22.210 "name": "raid_bdev1", 00:20:22.210 "uuid": "ed742f3a-90be-4b5e-8a90-63e4aa72385a", 00:20:22.210 "strip_size_kb": 0, 00:20:22.210 "state": "online", 00:20:22.210 "raid_level": "raid1", 00:20:22.210 "superblock": true, 00:20:22.210 "num_base_bdevs": 2, 00:20:22.210 "num_base_bdevs_discovered": 1, 00:20:22.210 "num_base_bdevs_operational": 1, 00:20:22.210 "base_bdevs_list": [ 00:20:22.210 { 00:20:22.210 "name": null, 00:20:22.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:22.210 "is_configured": false, 00:20:22.210 "data_offset": 0, 00:20:22.210 "data_size": 7936 00:20:22.210 }, 00:20:22.210 { 00:20:22.210 "name": "BaseBdev2", 00:20:22.210 "uuid": "1c38fe9f-8b92-573a-9adc-17117fd04b0a", 00:20:22.210 "is_configured": true, 00:20:22.210 "data_offset": 256, 00:20:22.210 "data_size": 7936 00:20:22.210 } 00:20:22.210 ] 00:20:22.210 }' 00:20:22.210 20:17:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:22.210 20:17:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:22.785 20:17:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:22.785 20:17:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:22.785 20:17:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:22.785 20:17:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:22.785 20:17:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:22.785 20:17:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:20:22.785 20:17:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.785 20:17:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:22.785 20:17:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:22.785 20:17:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.785 20:17:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:22.785 "name": "raid_bdev1", 00:20:22.785 "uuid": "ed742f3a-90be-4b5e-8a90-63e4aa72385a", 00:20:22.785 "strip_size_kb": 0, 00:20:22.785 "state": "online", 00:20:22.785 "raid_level": "raid1", 00:20:22.785 "superblock": true, 00:20:22.785 "num_base_bdevs": 2, 00:20:22.785 "num_base_bdevs_discovered": 1, 00:20:22.785 "num_base_bdevs_operational": 1, 00:20:22.785 "base_bdevs_list": [ 00:20:22.785 { 00:20:22.785 "name": null, 00:20:22.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:22.785 "is_configured": false, 00:20:22.785 "data_offset": 0, 00:20:22.785 "data_size": 7936 00:20:22.785 }, 00:20:22.785 { 00:20:22.785 "name": "BaseBdev2", 00:20:22.785 "uuid": "1c38fe9f-8b92-573a-9adc-17117fd04b0a", 00:20:22.785 "is_configured": true, 00:20:22.785 "data_offset": 256, 00:20:22.785 "data_size": 7936 00:20:22.785 } 00:20:22.785 ] 00:20:22.785 }' 00:20:22.785 20:17:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:22.785 20:17:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:22.785 20:17:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:22.785 20:17:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:22.785 20:17:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 86818 
00:20:22.785 20:17:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 86818 ']' 00:20:22.785 20:17:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 86818 00:20:22.785 20:17:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:20:23.044 20:17:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:23.044 20:17:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86818 00:20:23.044 killing process with pid 86818 00:20:23.044 Received shutdown signal, test time was about 60.000000 seconds 00:20:23.044 00:20:23.044 Latency(us) 00:20:23.044 [2024-10-17T20:17:08.698Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:23.044 [2024-10-17T20:17:08.698Z] =================================================================================================================== 00:20:23.044 [2024-10-17T20:17:08.698Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:23.044 20:17:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:23.044 20:17:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:23.044 20:17:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86818' 00:20:23.044 20:17:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@969 -- # kill 86818 00:20:23.044 [2024-10-17 20:17:08.466134] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:23.044 20:17:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@974 -- # wait 86818 00:20:23.044 [2024-10-17 20:17:08.466291] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:23.044 [2024-10-17 20:17:08.466360] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev 
base bdevs is 0, going to free all in destruct 00:20:23.044 [2024-10-17 20:17:08.466380] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:20:23.303 [2024-10-17 20:17:08.737318] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:24.240 20:17:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:20:24.240 00:20:24.240 real 0m21.603s 00:20:24.240 user 0m29.228s 00:20:24.240 sys 0m2.422s 00:20:24.240 ************************************ 00:20:24.240 END TEST raid_rebuild_test_sb_4k 00:20:24.240 ************************************ 00:20:24.240 20:17:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:24.240 20:17:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:24.240 20:17:09 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:20:24.240 20:17:09 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:20:24.240 20:17:09 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:20:24.240 20:17:09 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:24.240 20:17:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:24.240 ************************************ 00:20:24.240 START TEST raid_state_function_test_sb_md_separate 00:20:24.240 ************************************ 00:20:24.240 20:17:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:20:24.240 20:17:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:20:24.240 20:17:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:20:24.240 20:17:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # 
local superblock=true 00:20:24.240 20:17:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:20:24.240 20:17:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:20:24.240 20:17:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:24.240 20:17:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:20:24.240 20:17:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:24.240 20:17:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:24.240 20:17:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:20:24.240 20:17:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:24.240 20:17:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:24.240 20:17:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:24.240 20:17:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:20:24.240 20:17:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:20:24.240 20:17:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:20:24.240 20:17:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:20:24.240 20:17:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:20:24.240 20:17:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 
00:20:24.240 20:17:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:20:24.240 20:17:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:20:24.240 20:17:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:20:24.240 Process raid pid: 87521 00:20:24.240 20:17:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87521 00:20:24.240 20:17:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87521' 00:20:24.241 20:17:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87521 00:20:24.241 20:17:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:24.241 20:17:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 87521 ']' 00:20:24.241 20:17:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:24.241 20:17:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:24.241 20:17:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:24.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:24.241 20:17:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:24.241 20:17:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:24.500 [2024-10-17 20:17:09.974947] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 00:20:24.500 [2024-10-17 20:17:09.975375] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:24.760 [2024-10-17 20:17:10.153134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:24.760 [2024-10-17 20:17:10.311312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:25.019 [2024-10-17 20:17:10.534790] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:25.019 [2024-10-17 20:17:10.534845] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:25.587 20:17:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:25.587 20:17:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:20:25.587 20:17:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:25.587 20:17:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.587 20:17:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:25.587 [2024-10-17 20:17:11.006366] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:25.587 [2024-10-17 20:17:11.006433] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:20:25.587 [2024-10-17 20:17:11.006450] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:25.587 [2024-10-17 20:17:11.006467] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:25.587 20:17:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.587 20:17:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:25.587 20:17:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:25.587 20:17:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:25.587 20:17:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:25.587 20:17:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:25.587 20:17:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:25.587 20:17:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:25.587 20:17:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:25.587 20:17:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:25.587 20:17:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:25.587 20:17:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:25.587 20:17:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.587 20:17:11 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:25.587 20:17:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:25.587 20:17:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.587 20:17:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:25.587 "name": "Existed_Raid", 00:20:25.587 "uuid": "d6b9ccb4-c22f-464c-8adc-600b5208a33a", 00:20:25.587 "strip_size_kb": 0, 00:20:25.587 "state": "configuring", 00:20:25.587 "raid_level": "raid1", 00:20:25.587 "superblock": true, 00:20:25.587 "num_base_bdevs": 2, 00:20:25.587 "num_base_bdevs_discovered": 0, 00:20:25.587 "num_base_bdevs_operational": 2, 00:20:25.587 "base_bdevs_list": [ 00:20:25.587 { 00:20:25.587 "name": "BaseBdev1", 00:20:25.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:25.587 "is_configured": false, 00:20:25.587 "data_offset": 0, 00:20:25.587 "data_size": 0 00:20:25.587 }, 00:20:25.587 { 00:20:25.587 "name": "BaseBdev2", 00:20:25.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:25.587 "is_configured": false, 00:20:25.587 "data_offset": 0, 00:20:25.587 "data_size": 0 00:20:25.587 } 00:20:25.587 ] 00:20:25.587 }' 00:20:25.587 20:17:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:25.587 20:17:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:26.154 20:17:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:26.154 20:17:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.154 20:17:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:26.154 [2024-10-17 
20:17:11.510466] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:26.154 [2024-10-17 20:17:11.510528] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:20:26.154 20:17:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.154 20:17:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:26.154 20:17:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.154 20:17:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:26.154 [2024-10-17 20:17:11.518487] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:26.154 [2024-10-17 20:17:11.518554] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:26.154 [2024-10-17 20:17:11.518573] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:26.154 [2024-10-17 20:17:11.518598] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:26.154 20:17:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.154 20:17:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:20:26.154 20:17:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.154 20:17:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:26.154 [2024-10-17 20:17:11.572467] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:26.154 BaseBdev1 
00:20:26.154 20:17:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.154 20:17:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:26.155 20:17:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:20:26.155 20:17:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:26.155 20:17:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:20:26.155 20:17:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:26.155 20:17:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:26.155 20:17:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:20:26.155 20:17:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.155 20:17:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:26.155 20:17:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.155 20:17:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:26.155 20:17:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.155 20:17:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:26.155 [ 00:20:26.155 { 00:20:26.155 "name": "BaseBdev1", 00:20:26.155 "aliases": [ 00:20:26.155 "d2cc67e4-925e-4313-9623-0774fc177cf6" 00:20:26.155 ], 00:20:26.155 "product_name": "Malloc disk", 00:20:26.155 
"block_size": 4096, 00:20:26.155 "num_blocks": 8192, 00:20:26.155 "uuid": "d2cc67e4-925e-4313-9623-0774fc177cf6", 00:20:26.155 "md_size": 32, 00:20:26.155 "md_interleave": false, 00:20:26.155 "dif_type": 0, 00:20:26.155 "assigned_rate_limits": { 00:20:26.155 "rw_ios_per_sec": 0, 00:20:26.155 "rw_mbytes_per_sec": 0, 00:20:26.155 "r_mbytes_per_sec": 0, 00:20:26.155 "w_mbytes_per_sec": 0 00:20:26.155 }, 00:20:26.155 "claimed": true, 00:20:26.155 "claim_type": "exclusive_write", 00:20:26.155 "zoned": false, 00:20:26.155 "supported_io_types": { 00:20:26.155 "read": true, 00:20:26.155 "write": true, 00:20:26.155 "unmap": true, 00:20:26.155 "flush": true, 00:20:26.155 "reset": true, 00:20:26.155 "nvme_admin": false, 00:20:26.155 "nvme_io": false, 00:20:26.155 "nvme_io_md": false, 00:20:26.155 "write_zeroes": true, 00:20:26.155 "zcopy": true, 00:20:26.155 "get_zone_info": false, 00:20:26.155 "zone_management": false, 00:20:26.155 "zone_append": false, 00:20:26.155 "compare": false, 00:20:26.155 "compare_and_write": false, 00:20:26.155 "abort": true, 00:20:26.155 "seek_hole": false, 00:20:26.155 "seek_data": false, 00:20:26.155 "copy": true, 00:20:26.155 "nvme_iov_md": false 00:20:26.155 }, 00:20:26.155 "memory_domains": [ 00:20:26.155 { 00:20:26.155 "dma_device_id": "system", 00:20:26.155 "dma_device_type": 1 00:20:26.155 }, 00:20:26.155 { 00:20:26.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:26.155 "dma_device_type": 2 00:20:26.155 } 00:20:26.155 ], 00:20:26.155 "driver_specific": {} 00:20:26.155 } 00:20:26.155 ] 00:20:26.155 20:17:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.155 20:17:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:20:26.155 20:17:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:26.155 20:17:11 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:26.155 20:17:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:26.155 20:17:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:26.155 20:17:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:26.155 20:17:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:26.155 20:17:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:26.155 20:17:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:26.155 20:17:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:26.155 20:17:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:26.155 20:17:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:26.155 20:17:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.155 20:17:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:26.155 20:17:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:26.155 20:17:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.155 20:17:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:26.155 "name": "Existed_Raid", 00:20:26.155 "uuid": "f80f6f92-982d-4718-a380-d34f8cce8f37", 
00:20:26.155 "strip_size_kb": 0, 00:20:26.155 "state": "configuring", 00:20:26.155 "raid_level": "raid1", 00:20:26.155 "superblock": true, 00:20:26.155 "num_base_bdevs": 2, 00:20:26.155 "num_base_bdevs_discovered": 1, 00:20:26.155 "num_base_bdevs_operational": 2, 00:20:26.155 "base_bdevs_list": [ 00:20:26.155 { 00:20:26.155 "name": "BaseBdev1", 00:20:26.155 "uuid": "d2cc67e4-925e-4313-9623-0774fc177cf6", 00:20:26.155 "is_configured": true, 00:20:26.155 "data_offset": 256, 00:20:26.155 "data_size": 7936 00:20:26.155 }, 00:20:26.155 { 00:20:26.155 "name": "BaseBdev2", 00:20:26.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:26.155 "is_configured": false, 00:20:26.155 "data_offset": 0, 00:20:26.155 "data_size": 0 00:20:26.155 } 00:20:26.155 ] 00:20:26.155 }' 00:20:26.155 20:17:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:26.155 20:17:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:26.722 20:17:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:26.722 20:17:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.722 20:17:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:26.722 [2024-10-17 20:17:12.100713] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:26.722 [2024-10-17 20:17:12.100787] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:26.722 20:17:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.722 20:17:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:26.722 20:17:12 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.722 20:17:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:26.722 [2024-10-17 20:17:12.108726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:26.722 [2024-10-17 20:17:12.111306] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:26.722 [2024-10-17 20:17:12.111484] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:26.722 20:17:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.722 20:17:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:20:26.722 20:17:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:26.722 20:17:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:26.722 20:17:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:26.722 20:17:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:26.722 20:17:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:26.722 20:17:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:26.722 20:17:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:26.722 20:17:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:26.722 20:17:12 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:26.722 20:17:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:26.722 20:17:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:26.722 20:17:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:26.722 20:17:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.722 20:17:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:26.722 20:17:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:26.722 20:17:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.722 20:17:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:26.722 "name": "Existed_Raid", 00:20:26.722 "uuid": "72e19c78-26a7-4709-977a-4ee64f8b4324", 00:20:26.722 "strip_size_kb": 0, 00:20:26.722 "state": "configuring", 00:20:26.722 "raid_level": "raid1", 00:20:26.722 "superblock": true, 00:20:26.722 "num_base_bdevs": 2, 00:20:26.722 "num_base_bdevs_discovered": 1, 00:20:26.722 "num_base_bdevs_operational": 2, 00:20:26.722 "base_bdevs_list": [ 00:20:26.722 { 00:20:26.722 "name": "BaseBdev1", 00:20:26.722 "uuid": "d2cc67e4-925e-4313-9623-0774fc177cf6", 00:20:26.722 "is_configured": true, 00:20:26.722 "data_offset": 256, 00:20:26.722 "data_size": 7936 00:20:26.722 }, 00:20:26.722 { 00:20:26.722 "name": "BaseBdev2", 00:20:26.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:26.722 "is_configured": false, 00:20:26.723 "data_offset": 0, 00:20:26.723 "data_size": 0 00:20:26.723 } 00:20:26.723 ] 00:20:26.723 }' 00:20:26.723 20:17:12 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:26.723 20:17:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:26.981 20:17:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:20:26.981 20:17:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.981 20:17:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:27.255 [2024-10-17 20:17:12.648595] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:27.255 [2024-10-17 20:17:12.649222] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:27.256 [2024-10-17 20:17:12.649249] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:27.256 [2024-10-17 20:17:12.649354] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:27.256 [2024-10-17 20:17:12.649538] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:27.256 [2024-10-17 20:17:12.649562] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:20:27.256 BaseBdev2 00:20:27.256 [2024-10-17 20:17:12.649677] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:27.256 20:17:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.256 20:17:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:20:27.256 20:17:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:20:27.256 20:17:12 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:27.256 20:17:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:20:27.256 20:17:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:27.256 20:17:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:27.256 20:17:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:20:27.256 20:17:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.256 20:17:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:27.256 20:17:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.256 20:17:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:27.256 20:17:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.256 20:17:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:27.256 [ 00:20:27.256 { 00:20:27.256 "name": "BaseBdev2", 00:20:27.256 "aliases": [ 00:20:27.256 "5f183beb-41ce-41b1-a3cf-ddd57325d8ed" 00:20:27.256 ], 00:20:27.256 "product_name": "Malloc disk", 00:20:27.256 "block_size": 4096, 00:20:27.256 "num_blocks": 8192, 00:20:27.256 "uuid": "5f183beb-41ce-41b1-a3cf-ddd57325d8ed", 00:20:27.256 "md_size": 32, 00:20:27.256 "md_interleave": false, 00:20:27.256 "dif_type": 0, 00:20:27.256 "assigned_rate_limits": { 00:20:27.256 "rw_ios_per_sec": 0, 00:20:27.256 "rw_mbytes_per_sec": 0, 00:20:27.256 "r_mbytes_per_sec": 0, 00:20:27.256 "w_mbytes_per_sec": 0 00:20:27.256 }, 00:20:27.256 "claimed": true, 00:20:27.256 "claim_type": 
"exclusive_write", 00:20:27.256 "zoned": false, 00:20:27.256 "supported_io_types": { 00:20:27.256 "read": true, 00:20:27.256 "write": true, 00:20:27.256 "unmap": true, 00:20:27.256 "flush": true, 00:20:27.256 "reset": true, 00:20:27.256 "nvme_admin": false, 00:20:27.256 "nvme_io": false, 00:20:27.256 "nvme_io_md": false, 00:20:27.256 "write_zeroes": true, 00:20:27.256 "zcopy": true, 00:20:27.256 "get_zone_info": false, 00:20:27.256 "zone_management": false, 00:20:27.256 "zone_append": false, 00:20:27.256 "compare": false, 00:20:27.256 "compare_and_write": false, 00:20:27.256 "abort": true, 00:20:27.256 "seek_hole": false, 00:20:27.256 "seek_data": false, 00:20:27.256 "copy": true, 00:20:27.256 "nvme_iov_md": false 00:20:27.256 }, 00:20:27.256 "memory_domains": [ 00:20:27.256 { 00:20:27.256 "dma_device_id": "system", 00:20:27.256 "dma_device_type": 1 00:20:27.256 }, 00:20:27.256 { 00:20:27.256 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:27.256 "dma_device_type": 2 00:20:27.256 } 00:20:27.256 ], 00:20:27.256 "driver_specific": {} 00:20:27.256 } 00:20:27.256 ] 00:20:27.256 20:17:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.256 20:17:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:20:27.256 20:17:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:27.256 20:17:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:27.256 20:17:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:20:27.256 20:17:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:27.256 20:17:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:27.256 
20:17:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:27.256 20:17:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:27.256 20:17:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:27.256 20:17:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:27.256 20:17:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:27.256 20:17:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:27.256 20:17:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:27.256 20:17:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:27.256 20:17:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.256 20:17:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:27.256 20:17:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:27.256 20:17:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.256 20:17:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:27.256 "name": "Existed_Raid", 00:20:27.256 "uuid": "72e19c78-26a7-4709-977a-4ee64f8b4324", 00:20:27.256 "strip_size_kb": 0, 00:20:27.256 "state": "online", 00:20:27.256 "raid_level": "raid1", 00:20:27.256 "superblock": true, 00:20:27.256 "num_base_bdevs": 2, 00:20:27.256 "num_base_bdevs_discovered": 2, 00:20:27.256 "num_base_bdevs_operational": 2, 00:20:27.256 
"base_bdevs_list": [ 00:20:27.256 { 00:20:27.256 "name": "BaseBdev1", 00:20:27.256 "uuid": "d2cc67e4-925e-4313-9623-0774fc177cf6", 00:20:27.256 "is_configured": true, 00:20:27.256 "data_offset": 256, 00:20:27.256 "data_size": 7936 00:20:27.256 }, 00:20:27.256 { 00:20:27.256 "name": "BaseBdev2", 00:20:27.256 "uuid": "5f183beb-41ce-41b1-a3cf-ddd57325d8ed", 00:20:27.256 "is_configured": true, 00:20:27.256 "data_offset": 256, 00:20:27.256 "data_size": 7936 00:20:27.256 } 00:20:27.256 ] 00:20:27.256 }' 00:20:27.256 20:17:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:27.256 20:17:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:27.823 20:17:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:20:27.823 20:17:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:27.823 20:17:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:27.823 20:17:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:27.823 20:17:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:20:27.823 20:17:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:27.823 20:17:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:27.823 20:17:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:27.823 20:17:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.823 20:17:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:20:27.823 [2024-10-17 20:17:13.209239] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:27.823 20:17:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.823 20:17:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:27.823 "name": "Existed_Raid", 00:20:27.823 "aliases": [ 00:20:27.823 "72e19c78-26a7-4709-977a-4ee64f8b4324" 00:20:27.823 ], 00:20:27.823 "product_name": "Raid Volume", 00:20:27.823 "block_size": 4096, 00:20:27.823 "num_blocks": 7936, 00:20:27.823 "uuid": "72e19c78-26a7-4709-977a-4ee64f8b4324", 00:20:27.823 "md_size": 32, 00:20:27.823 "md_interleave": false, 00:20:27.823 "dif_type": 0, 00:20:27.823 "assigned_rate_limits": { 00:20:27.823 "rw_ios_per_sec": 0, 00:20:27.823 "rw_mbytes_per_sec": 0, 00:20:27.823 "r_mbytes_per_sec": 0, 00:20:27.823 "w_mbytes_per_sec": 0 00:20:27.823 }, 00:20:27.823 "claimed": false, 00:20:27.823 "zoned": false, 00:20:27.823 "supported_io_types": { 00:20:27.823 "read": true, 00:20:27.823 "write": true, 00:20:27.823 "unmap": false, 00:20:27.823 "flush": false, 00:20:27.823 "reset": true, 00:20:27.823 "nvme_admin": false, 00:20:27.823 "nvme_io": false, 00:20:27.823 "nvme_io_md": false, 00:20:27.823 "write_zeroes": true, 00:20:27.823 "zcopy": false, 00:20:27.823 "get_zone_info": false, 00:20:27.823 "zone_management": false, 00:20:27.823 "zone_append": false, 00:20:27.823 "compare": false, 00:20:27.823 "compare_and_write": false, 00:20:27.823 "abort": false, 00:20:27.823 "seek_hole": false, 00:20:27.823 "seek_data": false, 00:20:27.823 "copy": false, 00:20:27.823 "nvme_iov_md": false 00:20:27.823 }, 00:20:27.823 "memory_domains": [ 00:20:27.823 { 00:20:27.823 "dma_device_id": "system", 00:20:27.823 "dma_device_type": 1 00:20:27.823 }, 00:20:27.823 { 00:20:27.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:27.823 "dma_device_type": 2 00:20:27.823 }, 00:20:27.823 { 
00:20:27.823 "dma_device_id": "system", 00:20:27.823 "dma_device_type": 1 00:20:27.823 }, 00:20:27.823 { 00:20:27.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:27.823 "dma_device_type": 2 00:20:27.823 } 00:20:27.823 ], 00:20:27.823 "driver_specific": { 00:20:27.823 "raid": { 00:20:27.823 "uuid": "72e19c78-26a7-4709-977a-4ee64f8b4324", 00:20:27.823 "strip_size_kb": 0, 00:20:27.823 "state": "online", 00:20:27.823 "raid_level": "raid1", 00:20:27.823 "superblock": true, 00:20:27.823 "num_base_bdevs": 2, 00:20:27.823 "num_base_bdevs_discovered": 2, 00:20:27.823 "num_base_bdevs_operational": 2, 00:20:27.823 "base_bdevs_list": [ 00:20:27.823 { 00:20:27.823 "name": "BaseBdev1", 00:20:27.823 "uuid": "d2cc67e4-925e-4313-9623-0774fc177cf6", 00:20:27.823 "is_configured": true, 00:20:27.823 "data_offset": 256, 00:20:27.823 "data_size": 7936 00:20:27.823 }, 00:20:27.823 { 00:20:27.824 "name": "BaseBdev2", 00:20:27.824 "uuid": "5f183beb-41ce-41b1-a3cf-ddd57325d8ed", 00:20:27.824 "is_configured": true, 00:20:27.824 "data_offset": 256, 00:20:27.824 "data_size": 7936 00:20:27.824 } 00:20:27.824 ] 00:20:27.824 } 00:20:27.824 } 00:20:27.824 }' 00:20:27.824 20:17:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:27.824 20:17:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:20:27.824 BaseBdev2' 00:20:27.824 20:17:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:27.824 20:17:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:20:27.824 20:17:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:27.824 20:17:13 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:20:27.824 20:17:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:27.824 20:17:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.824 20:17:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:27.824 20:17:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.824 20:17:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:20:27.824 20:17:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:20:27.824 20:17:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:27.824 20:17:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:27.824 20:17:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:27.824 20:17:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.824 20:17:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:27.824 20:17:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.824 20:17:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:20:27.824 20:17:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:20:27.824 20:17:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:27.824 20:17:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.824 20:17:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:27.824 [2024-10-17 20:17:13.464934] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:28.083 20:17:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.083 20:17:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:20:28.083 20:17:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:20:28.083 20:17:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:28.083 20:17:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:20:28.083 20:17:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:20:28.083 20:17:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:20:28.083 20:17:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:28.083 20:17:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:28.083 20:17:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:28.083 20:17:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:28.083 20:17:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:20:28.083 20:17:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:28.083 20:17:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:28.083 20:17:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:28.083 20:17:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:28.083 20:17:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:28.083 20:17:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.083 20:17:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:28.083 20:17:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:28.083 20:17:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.083 20:17:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:28.083 "name": "Existed_Raid", 00:20:28.083 "uuid": "72e19c78-26a7-4709-977a-4ee64f8b4324", 00:20:28.083 "strip_size_kb": 0, 00:20:28.083 "state": "online", 00:20:28.083 "raid_level": "raid1", 00:20:28.083 "superblock": true, 00:20:28.083 "num_base_bdevs": 2, 00:20:28.083 "num_base_bdevs_discovered": 1, 00:20:28.083 "num_base_bdevs_operational": 1, 00:20:28.083 "base_bdevs_list": [ 00:20:28.083 { 00:20:28.083 "name": null, 00:20:28.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:28.083 "is_configured": false, 00:20:28.083 "data_offset": 0, 00:20:28.083 "data_size": 7936 00:20:28.083 }, 00:20:28.083 { 00:20:28.083 "name": "BaseBdev2", 00:20:28.083 "uuid": 
"5f183beb-41ce-41b1-a3cf-ddd57325d8ed", 00:20:28.083 "is_configured": true, 00:20:28.083 "data_offset": 256, 00:20:28.083 "data_size": 7936 00:20:28.083 } 00:20:28.083 ] 00:20:28.083 }' 00:20:28.083 20:17:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:28.083 20:17:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:28.649 20:17:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:20:28.649 20:17:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:28.649 20:17:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:28.649 20:17:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.649 20:17:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:28.649 20:17:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:28.649 20:17:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.649 20:17:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:28.649 20:17:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:28.649 20:17:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:20:28.649 20:17:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.649 20:17:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:28.649 [2024-10-17 20:17:14.123683] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:28.649 [2024-10-17 20:17:14.123818] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:28.649 [2024-10-17 20:17:14.213554] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:28.649 [2024-10-17 20:17:14.213625] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:28.649 [2024-10-17 20:17:14.213646] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:20:28.649 20:17:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.649 20:17:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:28.649 20:17:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:28.649 20:17:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:28.649 20:17:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:20:28.649 20:17:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.649 20:17:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:28.649 20:17:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.649 20:17:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:20:28.649 20:17:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:20:28.649 20:17:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:20:28.649 20:17:14 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87521 00:20:28.649 20:17:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 87521 ']' 00:20:28.649 20:17:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 87521 00:20:28.649 20:17:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:20:28.649 20:17:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:28.649 20:17:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87521 00:20:28.908 20:17:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:28.908 killing process with pid 87521 00:20:28.908 20:17:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:28.908 20:17:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87521' 00:20:28.908 20:17:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 87521 00:20:28.908 [2024-10-17 20:17:14.306916] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:28.908 20:17:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 87521 00:20:28.908 [2024-10-17 20:17:14.321850] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:29.842 20:17:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:20:29.842 00:20:29.842 real 0m5.472s 00:20:29.842 user 0m8.259s 00:20:29.842 sys 0m0.812s 00:20:29.842 ************************************ 00:20:29.842 END TEST raid_state_function_test_sb_md_separate 00:20:29.842 
************************************ 00:20:29.842 20:17:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:29.842 20:17:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:29.842 20:17:15 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:20:29.842 20:17:15 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:20:29.843 20:17:15 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:29.843 20:17:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:29.843 ************************************ 00:20:29.843 START TEST raid_superblock_test_md_separate 00:20:29.843 ************************************ 00:20:29.843 20:17:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:20:29.843 20:17:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:20:29.843 20:17:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:20:29.843 20:17:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:20:29.843 20:17:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:20:29.843 20:17:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:20:29.843 20:17:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:20:29.843 20:17:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:20:29.843 20:17:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:20:29.843 20:17:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local 
raid_bdev_name=raid_bdev1 00:20:29.843 20:17:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:20:29.843 20:17:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:20:29.843 20:17:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:20:29.843 20:17:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:20:29.843 20:17:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:20:29.843 20:17:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:20:29.843 20:17:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87773 00:20:29.843 20:17:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87773 00:20:29.843 20:17:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@831 -- # '[' -z 87773 ']' 00:20:29.843 20:17:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:20:29.843 20:17:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:29.843 20:17:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:29.843 20:17:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:29.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
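The `waitforlisten 87773` step above blocks until the `bdev_svc` app is up and listening on the UNIX domain socket `/var/tmp/spdk.sock` before any `rpc_cmd` is issued. A minimal sketch of what such a wait loop does (the polling interval and timeout here are assumptions for illustration, not taken from the SPDK helper's source):

```python
import socket
import time

def wait_for_unix_socket(path: str, timeout: float = 10.0, interval: float = 0.1) -> bool:
    """Poll a UNIX domain socket until connect() succeeds or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(path)
            return True  # something is listening on the socket
        except OSError:
            time.sleep(interval)  # app not up yet; retry
        finally:
            s.close()
    return False
```

In the test flow this gate is what separates process launch from the first RPC call; everything after it can assume the RPC server is reachable.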
00:20:29.843 20:17:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:29.843 20:17:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:29.843 [2024-10-17 20:17:15.492597] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 00:20:29.843 [2024-10-17 20:17:15.492952] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87773 ] 00:20:30.101 [2024-10-17 20:17:15.663305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.360 [2024-10-17 20:17:15.794786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:30.360 [2024-10-17 20:17:15.996323] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:30.360 [2024-10-17 20:17:15.996599] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:30.927 20:17:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:30.927 20:17:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # return 0 00:20:30.927 20:17:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:20:30.927 20:17:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:30.927 20:17:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:20:30.927 20:17:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:20:30.927 20:17:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:30.927 20:17:16 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:30.927 20:17:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:30.927 20:17:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:30.927 20:17:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:20:30.927 20:17:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.927 20:17:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:30.927 malloc1 00:20:30.927 20:17:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.927 20:17:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:30.927 20:17:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.927 20:17:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:30.927 [2024-10-17 20:17:16.533132] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:30.927 [2024-10-17 20:17:16.533200] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:30.927 [2024-10-17 20:17:16.533234] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:30.927 [2024-10-17 20:17:16.533250] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:30.927 [2024-10-17 20:17:16.535746] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:30.927 [2024-10-17 20:17:16.535925] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:20:30.927 pt1 00:20:30.927 20:17:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.927 20:17:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:30.927 20:17:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:30.927 20:17:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:20:30.927 20:17:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:20:30.927 20:17:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:30.927 20:17:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:30.927 20:17:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:30.928 20:17:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:30.928 20:17:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:20:30.928 20:17:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.928 20:17:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:31.236 malloc2 00:20:31.236 20:17:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.236 20:17:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:31.236 20:17:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.236 20:17:16 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:31.236 [2024-10-17 20:17:16.585706] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:31.236 [2024-10-17 20:17:16.585775] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:31.236 [2024-10-17 20:17:16.585810] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:31.236 [2024-10-17 20:17:16.585826] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:31.236 [2024-10-17 20:17:16.588311] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:31.236 [2024-10-17 20:17:16.588356] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:31.236 pt2 00:20:31.236 20:17:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.236 20:17:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:31.236 20:17:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:31.236 20:17:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:20:31.237 20:17:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.237 20:17:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:31.237 [2024-10-17 20:17:16.593778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:31.237 [2024-10-17 20:17:16.596324] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:31.237 [2024-10-17 20:17:16.596571] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:31.237 [2024-10-17 20:17:16.596593] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:31.237 [2024-10-17 20:17:16.596696] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:31.237 [2024-10-17 20:17:16.596860] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:31.237 [2024-10-17 20:17:16.596878] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:31.237 [2024-10-17 20:17:16.597175] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:31.237 20:17:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.237 20:17:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:31.237 20:17:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:31.237 20:17:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:31.237 20:17:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:31.237 20:17:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:31.237 20:17:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:31.237 20:17:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:31.237 20:17:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:31.237 20:17:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:31.237 20:17:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:31.237 20:17:16 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:31.237 20:17:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:31.237 20:17:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.237 20:17:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:31.237 20:17:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.237 20:17:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:31.237 "name": "raid_bdev1", 00:20:31.237 "uuid": "d2c06124-76d6-4c38-9691-5e674a0ff434", 00:20:31.237 "strip_size_kb": 0, 00:20:31.237 "state": "online", 00:20:31.237 "raid_level": "raid1", 00:20:31.237 "superblock": true, 00:20:31.237 "num_base_bdevs": 2, 00:20:31.237 "num_base_bdevs_discovered": 2, 00:20:31.237 "num_base_bdevs_operational": 2, 00:20:31.237 "base_bdevs_list": [ 00:20:31.237 { 00:20:31.237 "name": "pt1", 00:20:31.237 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:31.237 "is_configured": true, 00:20:31.237 "data_offset": 256, 00:20:31.237 "data_size": 7936 00:20:31.237 }, 00:20:31.237 { 00:20:31.237 "name": "pt2", 00:20:31.237 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:31.237 "is_configured": true, 00:20:31.237 "data_offset": 256, 00:20:31.237 "data_size": 7936 00:20:31.237 } 00:20:31.237 ] 00:20:31.237 }' 00:20:31.237 20:17:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:31.237 20:17:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:31.519 20:17:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:20:31.519 20:17:17 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:31.519 20:17:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:31.519 20:17:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:31.519 20:17:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:20:31.519 20:17:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:31.519 20:17:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:31.519 20:17:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.519 20:17:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:31.519 20:17:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:31.519 [2024-10-17 20:17:17.078261] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:31.519 20:17:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.519 20:17:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:31.519 "name": "raid_bdev1", 00:20:31.519 "aliases": [ 00:20:31.519 "d2c06124-76d6-4c38-9691-5e674a0ff434" 00:20:31.519 ], 00:20:31.519 "product_name": "Raid Volume", 00:20:31.519 "block_size": 4096, 00:20:31.519 "num_blocks": 7936, 00:20:31.519 "uuid": "d2c06124-76d6-4c38-9691-5e674a0ff434", 00:20:31.519 "md_size": 32, 00:20:31.519 "md_interleave": false, 00:20:31.519 "dif_type": 0, 00:20:31.519 "assigned_rate_limits": { 00:20:31.519 "rw_ios_per_sec": 0, 00:20:31.519 "rw_mbytes_per_sec": 0, 00:20:31.519 "r_mbytes_per_sec": 0, 00:20:31.519 "w_mbytes_per_sec": 0 00:20:31.519 }, 00:20:31.519 "claimed": false, 00:20:31.519 "zoned": false, 
00:20:31.519 "supported_io_types": { 00:20:31.519 "read": true, 00:20:31.519 "write": true, 00:20:31.519 "unmap": false, 00:20:31.519 "flush": false, 00:20:31.519 "reset": true, 00:20:31.519 "nvme_admin": false, 00:20:31.519 "nvme_io": false, 00:20:31.519 "nvme_io_md": false, 00:20:31.519 "write_zeroes": true, 00:20:31.519 "zcopy": false, 00:20:31.519 "get_zone_info": false, 00:20:31.519 "zone_management": false, 00:20:31.519 "zone_append": false, 00:20:31.519 "compare": false, 00:20:31.519 "compare_and_write": false, 00:20:31.519 "abort": false, 00:20:31.519 "seek_hole": false, 00:20:31.519 "seek_data": false, 00:20:31.519 "copy": false, 00:20:31.519 "nvme_iov_md": false 00:20:31.519 }, 00:20:31.519 "memory_domains": [ 00:20:31.519 { 00:20:31.519 "dma_device_id": "system", 00:20:31.519 "dma_device_type": 1 00:20:31.519 }, 00:20:31.519 { 00:20:31.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:31.519 "dma_device_type": 2 00:20:31.520 }, 00:20:31.520 { 00:20:31.520 "dma_device_id": "system", 00:20:31.520 "dma_device_type": 1 00:20:31.520 }, 00:20:31.520 { 00:20:31.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:31.520 "dma_device_type": 2 00:20:31.520 } 00:20:31.520 ], 00:20:31.520 "driver_specific": { 00:20:31.520 "raid": { 00:20:31.520 "uuid": "d2c06124-76d6-4c38-9691-5e674a0ff434", 00:20:31.520 "strip_size_kb": 0, 00:20:31.520 "state": "online", 00:20:31.520 "raid_level": "raid1", 00:20:31.520 "superblock": true, 00:20:31.520 "num_base_bdevs": 2, 00:20:31.520 "num_base_bdevs_discovered": 2, 00:20:31.520 "num_base_bdevs_operational": 2, 00:20:31.520 "base_bdevs_list": [ 00:20:31.520 { 00:20:31.520 "name": "pt1", 00:20:31.520 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:31.520 "is_configured": true, 00:20:31.520 "data_offset": 256, 00:20:31.520 "data_size": 7936 00:20:31.520 }, 00:20:31.520 { 00:20:31.520 "name": "pt2", 00:20:31.520 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:31.520 "is_configured": true, 00:20:31.520 "data_offset": 256, 
00:20:31.520 "data_size": 7936 00:20:31.520 } 00:20:31.520 ] 00:20:31.520 } 00:20:31.520 } 00:20:31.520 }' 00:20:31.520 20:17:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:31.520 20:17:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:31.520 pt2' 00:20:31.520 20:17:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:31.777 20:17:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:20:31.777 20:17:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:31.777 20:17:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:31.777 20:17:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:31.777 20:17:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.777 20:17:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:31.777 20:17:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.777 20:17:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:20:31.777 20:17:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:20:31.777 20:17:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:31.777 20:17:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs 
-b pt2 00:20:31.777 20:17:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:31.777 20:17:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.777 20:17:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:31.777 20:17:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.777 20:17:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:20:31.777 20:17:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:20:31.777 20:17:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:31.777 20:17:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.777 20:17:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:31.777 20:17:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:20:31.777 [2024-10-17 20:17:17.334269] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:31.777 20:17:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.777 20:17:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d2c06124-76d6-4c38-9691-5e674a0ff434 00:20:31.777 20:17:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z d2c06124-76d6-4c38-9691-5e674a0ff434 ']' 00:20:31.777 20:17:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:31.777 20:17:17 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.777 20:17:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:31.777 [2024-10-17 20:17:17.381917] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:31.777 [2024-10-17 20:17:17.381950] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:31.777 [2024-10-17 20:17:17.382092] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:31.777 [2024-10-17 20:17:17.382175] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:31.777 [2024-10-17 20:17:17.382196] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:31.777 20:17:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.777 20:17:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:31.777 20:17:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:20:31.777 20:17:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.777 20:17:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:31.777 20:17:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.034 20:17:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:20:32.034 20:17:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:20:32.034 20:17:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:32.034 20:17:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 
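Several checks in this log filter `rpc_cmd bdev_raid_get_bdevs all` output through jq, e.g. `jq -r '.[] | select(.name == "raid_bdev1")'`. The same selection can be replayed in Python against a trimmed copy of the bdev state recorded earlier in the log (field values are taken from the log; the entry is shortened to a few fields for illustration):

```python
import json

# Trimmed-down copy of the bdev_raid_get_bdevs output recorded in the log.
raid_bdevs_json = """
[
  {
    "name": "raid_bdev1",
    "uuid": "d2c06124-76d6-4c38-9691-5e674a0ff434",
    "state": "online",
    "raid_level": "raid1",
    "num_base_bdevs_discovered": 2
  }
]
"""

def select_by_name(doc: str, name: str):
    """Rough equivalent of jq '.[] | select(.name == NAME)': return the first
    matching entry, or None when nothing matches (jq would emit no output)."""
    return next((b for b in json.loads(doc) if b.get("name") == name), None)
```

After `bdev_raid_delete raid_bdev1` the same filter produces no output, which is why the script's next step sets `raid_bdev=` to the empty string and skips the `-n` branch.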
00:20:32.034 20:17:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.034 20:17:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:32.034 20:17:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.034 20:17:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:32.034 20:17:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:20:32.034 20:17:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.034 20:17:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:32.035 20:17:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.035 20:17:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:20:32.035 20:17:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.035 20:17:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:32.035 20:17:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:32.035 20:17:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.035 20:17:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:20:32.035 20:17:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:32.035 20:17:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:20:32.035 20:17:17 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:32.035 20:17:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:32.035 20:17:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:32.035 20:17:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:32.035 20:17:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:32.035 20:17:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:32.035 20:17:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.035 20:17:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:32.035 [2024-10-17 20:17:17.517963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:32.035 [2024-10-17 20:17:17.520476] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:32.035 [2024-10-17 20:17:17.520726] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:20:32.035 [2024-10-17 20:17:17.520810] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:20:32.035 [2024-10-17 20:17:17.520838] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:32.035 [2024-10-17 20:17:17.520855] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:20:32.035 request: 00:20:32.035 { 00:20:32.035 "name": 
"raid_bdev1", 00:20:32.035 "raid_level": "raid1", 00:20:32.035 "base_bdevs": [ 00:20:32.035 "malloc1", 00:20:32.035 "malloc2" 00:20:32.035 ], 00:20:32.035 "superblock": false, 00:20:32.035 "method": "bdev_raid_create", 00:20:32.035 "req_id": 1 00:20:32.035 } 00:20:32.035 Got JSON-RPC error response 00:20:32.035 response: 00:20:32.035 { 00:20:32.035 "code": -17, 00:20:32.035 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:32.035 } 00:20:32.035 20:17:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:32.035 20:17:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # es=1 00:20:32.035 20:17:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:32.035 20:17:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:32.035 20:17:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:32.035 20:17:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.035 20:17:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.035 20:17:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:32.035 20:17:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:20:32.035 20:17:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.035 20:17:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:20:32.035 20:17:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:20:32.035 20:17:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:20:32.035 20:17:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.035 20:17:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:32.035 [2024-10-17 20:17:17.577924] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:32.035 [2024-10-17 20:17:17.578125] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:32.035 [2024-10-17 20:17:17.578196] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:32.035 [2024-10-17 20:17:17.578321] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:32.035 [2024-10-17 20:17:17.580913] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:32.035 [2024-10-17 20:17:17.581088] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:32.035 [2024-10-17 20:17:17.581256] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:32.035 [2024-10-17 20:17:17.581462] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:32.035 pt1 00:20:32.035 20:17:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.035 20:17:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:20:32.035 20:17:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:32.035 20:17:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:32.035 20:17:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:32.035 20:17:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:20:32.035 20:17:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:32.035 20:17:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:32.035 20:17:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:32.035 20:17:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:32.035 20:17:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:32.035 20:17:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.035 20:17:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.035 20:17:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.035 20:17:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:32.035 20:17:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.035 20:17:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:32.035 "name": "raid_bdev1", 00:20:32.035 "uuid": "d2c06124-76d6-4c38-9691-5e674a0ff434", 00:20:32.035 "strip_size_kb": 0, 00:20:32.035 "state": "configuring", 00:20:32.035 "raid_level": "raid1", 00:20:32.035 "superblock": true, 00:20:32.035 "num_base_bdevs": 2, 00:20:32.035 "num_base_bdevs_discovered": 1, 00:20:32.035 "num_base_bdevs_operational": 2, 00:20:32.035 "base_bdevs_list": [ 00:20:32.035 { 00:20:32.035 "name": "pt1", 00:20:32.035 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:32.035 "is_configured": true, 00:20:32.035 "data_offset": 256, 00:20:32.035 "data_size": 7936 00:20:32.035 }, 00:20:32.035 { 00:20:32.035 "name": null, 00:20:32.035 
"uuid": "00000000-0000-0000-0000-000000000002", 00:20:32.035 "is_configured": false, 00:20:32.035 "data_offset": 256, 00:20:32.035 "data_size": 7936 00:20:32.035 } 00:20:32.035 ] 00:20:32.035 }' 00:20:32.035 20:17:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:32.035 20:17:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:32.602 20:17:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:20:32.602 20:17:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:20:32.602 20:17:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:32.602 20:17:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:32.602 20:17:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.602 20:17:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:32.602 [2024-10-17 20:17:18.066094] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:32.602 [2024-10-17 20:17:18.066181] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:32.602 [2024-10-17 20:17:18.066215] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:32.602 [2024-10-17 20:17:18.066233] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:32.602 [2024-10-17 20:17:18.066528] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:32.602 [2024-10-17 20:17:18.066565] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:32.602 [2024-10-17 20:17:18.066636] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt2 00:20:32.602 [2024-10-17 20:17:18.066672] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:32.602 [2024-10-17 20:17:18.066820] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:32.602 [2024-10-17 20:17:18.066841] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:32.602 [2024-10-17 20:17:18.066929] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:32.602 [2024-10-17 20:17:18.067100] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:32.602 [2024-10-17 20:17:18.067116] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:20:32.602 [2024-10-17 20:17:18.067238] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:32.602 pt2 00:20:32.602 20:17:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.602 20:17:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:32.602 20:17:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:32.602 20:17:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:32.602 20:17:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:32.602 20:17:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:32.602 20:17:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:32.602 20:17:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:32.602 20:17:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:20:32.602 20:17:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:32.602 20:17:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:32.602 20:17:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:32.602 20:17:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:32.602 20:17:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.602 20:17:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.602 20:17:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.602 20:17:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:32.602 20:17:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.602 20:17:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:32.602 "name": "raid_bdev1", 00:20:32.602 "uuid": "d2c06124-76d6-4c38-9691-5e674a0ff434", 00:20:32.602 "strip_size_kb": 0, 00:20:32.602 "state": "online", 00:20:32.602 "raid_level": "raid1", 00:20:32.602 "superblock": true, 00:20:32.602 "num_base_bdevs": 2, 00:20:32.602 "num_base_bdevs_discovered": 2, 00:20:32.602 "num_base_bdevs_operational": 2, 00:20:32.602 "base_bdevs_list": [ 00:20:32.602 { 00:20:32.602 "name": "pt1", 00:20:32.602 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:32.602 "is_configured": true, 00:20:32.602 "data_offset": 256, 00:20:32.602 "data_size": 7936 00:20:32.602 }, 00:20:32.602 { 00:20:32.602 "name": "pt2", 00:20:32.602 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:32.602 "is_configured": true, 00:20:32.602 "data_offset": 256, 
00:20:32.602 "data_size": 7936 00:20:32.602 } 00:20:32.602 ] 00:20:32.602 }' 00:20:32.602 20:17:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:32.602 20:17:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:33.169 20:17:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:20:33.169 20:17:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:33.169 20:17:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:33.169 20:17:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:33.169 20:17:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:20:33.169 20:17:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:33.169 20:17:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:33.169 20:17:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:33.169 20:17:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.169 20:17:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:33.169 [2024-10-17 20:17:18.586574] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:33.169 20:17:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.169 20:17:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:33.169 "name": "raid_bdev1", 00:20:33.169 "aliases": [ 00:20:33.169 "d2c06124-76d6-4c38-9691-5e674a0ff434" 00:20:33.169 ], 00:20:33.169 "product_name": 
"Raid Volume", 00:20:33.169 "block_size": 4096, 00:20:33.169 "num_blocks": 7936, 00:20:33.169 "uuid": "d2c06124-76d6-4c38-9691-5e674a0ff434", 00:20:33.169 "md_size": 32, 00:20:33.169 "md_interleave": false, 00:20:33.169 "dif_type": 0, 00:20:33.169 "assigned_rate_limits": { 00:20:33.169 "rw_ios_per_sec": 0, 00:20:33.169 "rw_mbytes_per_sec": 0, 00:20:33.169 "r_mbytes_per_sec": 0, 00:20:33.169 "w_mbytes_per_sec": 0 00:20:33.169 }, 00:20:33.169 "claimed": false, 00:20:33.169 "zoned": false, 00:20:33.169 "supported_io_types": { 00:20:33.169 "read": true, 00:20:33.169 "write": true, 00:20:33.169 "unmap": false, 00:20:33.169 "flush": false, 00:20:33.169 "reset": true, 00:20:33.169 "nvme_admin": false, 00:20:33.169 "nvme_io": false, 00:20:33.169 "nvme_io_md": false, 00:20:33.169 "write_zeroes": true, 00:20:33.169 "zcopy": false, 00:20:33.169 "get_zone_info": false, 00:20:33.169 "zone_management": false, 00:20:33.169 "zone_append": false, 00:20:33.169 "compare": false, 00:20:33.169 "compare_and_write": false, 00:20:33.169 "abort": false, 00:20:33.169 "seek_hole": false, 00:20:33.169 "seek_data": false, 00:20:33.169 "copy": false, 00:20:33.169 "nvme_iov_md": false 00:20:33.169 }, 00:20:33.170 "memory_domains": [ 00:20:33.170 { 00:20:33.170 "dma_device_id": "system", 00:20:33.170 "dma_device_type": 1 00:20:33.170 }, 00:20:33.170 { 00:20:33.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:33.170 "dma_device_type": 2 00:20:33.170 }, 00:20:33.170 { 00:20:33.170 "dma_device_id": "system", 00:20:33.170 "dma_device_type": 1 00:20:33.170 }, 00:20:33.170 { 00:20:33.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:33.170 "dma_device_type": 2 00:20:33.170 } 00:20:33.170 ], 00:20:33.170 "driver_specific": { 00:20:33.170 "raid": { 00:20:33.170 "uuid": "d2c06124-76d6-4c38-9691-5e674a0ff434", 00:20:33.170 "strip_size_kb": 0, 00:20:33.170 "state": "online", 00:20:33.170 "raid_level": "raid1", 00:20:33.170 "superblock": true, 00:20:33.170 "num_base_bdevs": 2, 00:20:33.170 
"num_base_bdevs_discovered": 2, 00:20:33.170 "num_base_bdevs_operational": 2, 00:20:33.170 "base_bdevs_list": [ 00:20:33.170 { 00:20:33.170 "name": "pt1", 00:20:33.170 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:33.170 "is_configured": true, 00:20:33.170 "data_offset": 256, 00:20:33.170 "data_size": 7936 00:20:33.170 }, 00:20:33.170 { 00:20:33.170 "name": "pt2", 00:20:33.170 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:33.170 "is_configured": true, 00:20:33.170 "data_offset": 256, 00:20:33.170 "data_size": 7936 00:20:33.170 } 00:20:33.170 ] 00:20:33.170 } 00:20:33.170 } 00:20:33.170 }' 00:20:33.170 20:17:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:33.170 20:17:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:33.170 pt2' 00:20:33.170 20:17:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:33.170 20:17:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:20:33.170 20:17:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:33.170 20:17:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:33.170 20:17:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:33.170 20:17:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.170 20:17:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:33.170 20:17:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.170 
20:17:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:20:33.170 20:17:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:20:33.170 20:17:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:33.170 20:17:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:33.170 20:17:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.170 20:17:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:33.170 20:17:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:33.170 20:17:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.429 20:17:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:20:33.430 20:17:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:20:33.430 20:17:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:33.430 20:17:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:20:33.430 20:17:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.430 20:17:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:33.430 [2024-10-17 20:17:18.830623] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:33.430 20:17:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:20:33.430 20:17:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' d2c06124-76d6-4c38-9691-5e674a0ff434 '!=' d2c06124-76d6-4c38-9691-5e674a0ff434 ']' 00:20:33.430 20:17:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:20:33.430 20:17:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:33.430 20:17:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:20:33.430 20:17:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:20:33.430 20:17:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.430 20:17:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:33.430 [2024-10-17 20:17:18.878401] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:20:33.430 20:17:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.430 20:17:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:33.430 20:17:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:33.430 20:17:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:33.430 20:17:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:33.430 20:17:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:33.430 20:17:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:33.430 20:17:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:33.430 20:17:18 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:33.430 20:17:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:33.430 20:17:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:33.430 20:17:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:33.430 20:17:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.430 20:17:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:33.430 20:17:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:33.430 20:17:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.430 20:17:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:33.430 "name": "raid_bdev1", 00:20:33.430 "uuid": "d2c06124-76d6-4c38-9691-5e674a0ff434", 00:20:33.430 "strip_size_kb": 0, 00:20:33.430 "state": "online", 00:20:33.430 "raid_level": "raid1", 00:20:33.430 "superblock": true, 00:20:33.430 "num_base_bdevs": 2, 00:20:33.430 "num_base_bdevs_discovered": 1, 00:20:33.430 "num_base_bdevs_operational": 1, 00:20:33.430 "base_bdevs_list": [ 00:20:33.430 { 00:20:33.430 "name": null, 00:20:33.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:33.430 "is_configured": false, 00:20:33.430 "data_offset": 0, 00:20:33.430 "data_size": 7936 00:20:33.430 }, 00:20:33.430 { 00:20:33.430 "name": "pt2", 00:20:33.430 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:33.430 "is_configured": true, 00:20:33.430 "data_offset": 256, 00:20:33.430 "data_size": 7936 00:20:33.430 } 00:20:33.430 ] 00:20:33.430 }' 00:20:33.430 20:17:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:20:33.430 20:17:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:33.997 20:17:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:33.997 20:17:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.997 20:17:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:33.997 [2024-10-17 20:17:19.366443] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:33.997 [2024-10-17 20:17:19.366477] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:33.997 [2024-10-17 20:17:19.366576] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:33.997 [2024-10-17 20:17:19.366642] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:33.997 [2024-10-17 20:17:19.366663] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:20:33.997 20:17:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.997 20:17:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:20:33.997 20:17:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:33.997 20:17:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.997 20:17:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:33.997 20:17:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.997 20:17:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:20:33.997 20:17:19 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:20:33.997 20:17:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:20:33.997 20:17:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:33.997 20:17:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:20:33.997 20:17:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.997 20:17:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:33.997 20:17:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.997 20:17:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:20:33.997 20:17:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:33.997 20:17:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:20:33.997 20:17:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:20:33.997 20:17:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:20:33.997 20:17:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:33.997 20:17:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.997 20:17:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:33.997 [2024-10-17 20:17:19.434450] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:33.997 [2024-10-17 20:17:19.434536] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:33.997 
[2024-10-17 20:17:19.434562] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:20:33.997 [2024-10-17 20:17:19.434580] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:33.997 [2024-10-17 20:17:19.437225] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:33.997 [2024-10-17 20:17:19.437276] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:33.997 [2024-10-17 20:17:19.437345] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:33.997 [2024-10-17 20:17:19.437419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:33.997 [2024-10-17 20:17:19.437539] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:33.997 [2024-10-17 20:17:19.437561] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:33.997 [2024-10-17 20:17:19.437648] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:33.997 [2024-10-17 20:17:19.437789] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:33.997 [2024-10-17 20:17:19.437803] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:20:33.997 [2024-10-17 20:17:19.437922] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:33.997 pt2 00:20:33.997 20:17:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.997 20:17:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:33.997 20:17:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:33.997 20:17:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- 
# local expected_state=online 00:20:33.997 20:17:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:33.997 20:17:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:33.997 20:17:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:33.997 20:17:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:33.997 20:17:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:33.997 20:17:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:33.997 20:17:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:33.997 20:17:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:33.997 20:17:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:33.997 20:17:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.998 20:17:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:33.998 20:17:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.998 20:17:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:33.998 "name": "raid_bdev1", 00:20:33.998 "uuid": "d2c06124-76d6-4c38-9691-5e674a0ff434", 00:20:33.998 "strip_size_kb": 0, 00:20:33.998 "state": "online", 00:20:33.998 "raid_level": "raid1", 00:20:33.998 "superblock": true, 00:20:33.998 "num_base_bdevs": 2, 00:20:33.998 "num_base_bdevs_discovered": 1, 00:20:33.998 "num_base_bdevs_operational": 1, 00:20:33.998 "base_bdevs_list": [ 00:20:33.998 { 00:20:33.998 
"name": null, 00:20:33.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:33.998 "is_configured": false, 00:20:33.998 "data_offset": 256, 00:20:33.998 "data_size": 7936 00:20:33.998 }, 00:20:33.998 { 00:20:33.998 "name": "pt2", 00:20:33.998 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:33.998 "is_configured": true, 00:20:33.998 "data_offset": 256, 00:20:33.998 "data_size": 7936 00:20:33.998 } 00:20:33.998 ] 00:20:33.998 }' 00:20:33.998 20:17:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:33.998 20:17:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:34.565 20:17:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:34.565 20:17:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.565 20:17:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:34.565 [2024-10-17 20:17:19.950572] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:34.565 [2024-10-17 20:17:19.950763] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:34.565 [2024-10-17 20:17:19.950879] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:34.565 [2024-10-17 20:17:19.950952] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:34.565 [2024-10-17 20:17:19.950969] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:20:34.565 20:17:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.565 20:17:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:34.565 20:17:19 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.565 20:17:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:34.565 20:17:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:20:34.565 20:17:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.565 20:17:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:20:34.565 20:17:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:20:34.565 20:17:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:20:34.565 20:17:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:34.565 20:17:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.565 20:17:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:34.565 [2024-10-17 20:17:20.014611] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:34.565 [2024-10-17 20:17:20.014812] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:34.565 [2024-10-17 20:17:20.014891] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:20:34.565 [2024-10-17 20:17:20.015030] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:34.565 [2024-10-17 20:17:20.017662] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:34.565 [2024-10-17 20:17:20.017816] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:34.565 [2024-10-17 20:17:20.018017] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock 
found on bdev pt1 00:20:34.565 [2024-10-17 20:17:20.018206] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:34.565 [2024-10-17 20:17:20.018519] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater 00:20:34.565 than existing raid bdev raid_bdev1 (2) 00:20:34.565 [2024-10-17 20:17:20.018647] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:34.565 [2024-10-17 20:17:20.018687] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:20:34.565 [2024-10-17 20:17:20.018766] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:34.565 [2024-10-17 20:17:20.018921] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:20:34.565 [2024-10-17 20:17:20.018939] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:34.565 [2024-10-17 20:17:20.019052] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:34.565 [2024-10-17 20:17:20.019192] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:20:34.565 [2024-10-17 20:17:20.019211] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:20:34.565 [2024-10-17 20:17:20.019346] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:34.565 20:17:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.565 20:17:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:20:34.565 20:17:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:34.565 20:17:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:20:34.565 20:17:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:34.565 20:17:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:34.565 20:17:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:34.565 20:17:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:34.565 20:17:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:34.565 20:17:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:34.565 20:17:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:34.565 20:17:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:34.565 20:17:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:34.565 20:17:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.565 20:17:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:34.565 20:17:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:34.565 20:17:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.565 20:17:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:34.565 "name": "raid_bdev1", 00:20:34.565 "uuid": "d2c06124-76d6-4c38-9691-5e674a0ff434", 00:20:34.565 "strip_size_kb": 0, 00:20:34.565 "state": "online", 00:20:34.565 "raid_level": "raid1", 00:20:34.565 "superblock": true, 00:20:34.565 "num_base_bdevs": 2, 00:20:34.565 "num_base_bdevs_discovered": 1, 00:20:34.565 
"num_base_bdevs_operational": 1, 00:20:34.565 "base_bdevs_list": [ 00:20:34.565 { 00:20:34.565 "name": null, 00:20:34.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:34.565 "is_configured": false, 00:20:34.565 "data_offset": 256, 00:20:34.565 "data_size": 7936 00:20:34.565 }, 00:20:34.565 { 00:20:34.565 "name": "pt2", 00:20:34.565 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:34.565 "is_configured": true, 00:20:34.565 "data_offset": 256, 00:20:34.565 "data_size": 7936 00:20:34.565 } 00:20:34.565 ] 00:20:34.565 }' 00:20:34.565 20:17:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:34.565 20:17:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:35.133 20:17:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:20:35.133 20:17:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.133 20:17:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:35.133 20:17:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:20:35.133 20:17:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.133 20:17:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:20:35.133 20:17:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:35.133 20:17:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:20:35.133 20:17:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.133 20:17:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:35.133 [2024-10-17 
20:17:20.583152] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:35.133 20:17:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.133 20:17:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' d2c06124-76d6-4c38-9691-5e674a0ff434 '!=' d2c06124-76d6-4c38-9691-5e674a0ff434 ']' 00:20:35.133 20:17:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87773 00:20:35.133 20:17:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@950 -- # '[' -z 87773 ']' 00:20:35.133 20:17:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # kill -0 87773 00:20:35.133 20:17:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # uname 00:20:35.133 20:17:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:35.133 20:17:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87773 00:20:35.133 killing process with pid 87773 00:20:35.133 20:17:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:35.133 20:17:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:35.133 20:17:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87773' 00:20:35.133 20:17:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@969 -- # kill 87773 00:20:35.133 [2024-10-17 20:17:20.654844] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:35.133 20:17:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@974 -- # wait 87773 00:20:35.133 [2024-10-17 20:17:20.654954] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:20:35.133 [2024-10-17 20:17:20.655049] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:35.133 [2024-10-17 20:17:20.655088] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:20:35.391 [2024-10-17 20:17:20.849206] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:36.355 20:17:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:20:36.355 00:20:36.355 real 0m6.442s 00:20:36.355 user 0m10.233s 00:20:36.355 sys 0m0.925s 00:20:36.355 20:17:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:36.355 20:17:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:36.355 ************************************ 00:20:36.355 END TEST raid_superblock_test_md_separate 00:20:36.355 ************************************ 00:20:36.355 20:17:21 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:20:36.355 20:17:21 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:20:36.355 20:17:21 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:20:36.355 20:17:21 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:36.355 20:17:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:36.355 ************************************ 00:20:36.355 START TEST raid_rebuild_test_sb_md_separate 00:20:36.355 ************************************ 00:20:36.355 20:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:20:36.355 20:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:20:36.355 20:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:20:36.355 20:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:20:36.355 20:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:20:36.355 20:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:20:36.355 20:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:36.355 20:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:36.355 20:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:36.355 20:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:36.355 20:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:36.355 20:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:36.355 20:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:36.355 20:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:36.355 20:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:36.355 20:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:36.355 20:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:36.355 20:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:36.355 20:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:36.355 20:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:36.355 
20:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:36.355 20:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:20:36.355 20:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:20:36.355 20:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:20:36.355 20:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:20:36.355 20:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=88102 00:20:36.355 20:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 88102 00:20:36.355 20:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 88102 ']' 00:20:36.355 20:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:36.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:36.355 20:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:36.355 20:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:36.355 20:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:36.355 20:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:36.355 20:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:36.355 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:20:36.355 Zero copy mechanism will not be used. 00:20:36.355 [2024-10-17 20:17:21.997714] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 00:20:36.356 [2024-10-17 20:17:21.997900] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88102 ] 00:20:36.614 [2024-10-17 20:17:22.173179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.872 [2024-10-17 20:17:22.331293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:37.130 [2024-10-17 20:17:22.550480] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:37.130 [2024-10-17 20:17:22.550540] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:37.388 20:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:37.388 20:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:20:37.388 20:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:37.388 20:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:20:37.388 20:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.388 20:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:37.388 BaseBdev1_malloc 00:20:37.388 20:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.388 20:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:37.388 20:17:23 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.388 20:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:37.388 [2024-10-17 20:17:23.037505] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:37.388 [2024-10-17 20:17:23.037580] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:37.388 [2024-10-17 20:17:23.037613] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:37.388 [2024-10-17 20:17:23.037633] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:37.646 [2024-10-17 20:17:23.040742] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:37.646 [2024-10-17 20:17:23.040802] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:37.646 BaseBdev1 00:20:37.646 20:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.647 20:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:37.647 20:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:20:37.647 20:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.647 20:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:37.647 BaseBdev2_malloc 00:20:37.647 20:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.647 20:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:37.647 20:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:37.647 20:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:37.647 [2024-10-17 20:17:23.090656] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:37.647 [2024-10-17 20:17:23.090744] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:37.647 [2024-10-17 20:17:23.090776] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:37.647 [2024-10-17 20:17:23.090795] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:37.647 [2024-10-17 20:17:23.093284] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:37.647 [2024-10-17 20:17:23.093465] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:37.647 BaseBdev2 00:20:37.647 20:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.647 20:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:20:37.647 20:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.647 20:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:37.647 spare_malloc 00:20:37.647 20:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.647 20:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:37.647 20:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.647 20:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:37.647 spare_delay 00:20:37.647 20:17:23 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.647 20:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:37.647 20:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.647 20:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:37.647 [2024-10-17 20:17:23.164097] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:37.647 [2024-10-17 20:17:23.164186] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:37.647 [2024-10-17 20:17:23.164218] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:37.647 [2024-10-17 20:17:23.164237] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:37.647 [2024-10-17 20:17:23.166781] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:37.647 [2024-10-17 20:17:23.166834] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:37.647 spare 00:20:37.647 20:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.647 20:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:20:37.647 20:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.647 20:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:37.647 [2024-10-17 20:17:23.172169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:37.647 [2024-10-17 20:17:23.174575] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:20:37.647 [2024-10-17 20:17:23.174822] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:37.647 [2024-10-17 20:17:23.174847] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:37.647 [2024-10-17 20:17:23.174942] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:37.647 [2024-10-17 20:17:23.175147] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:37.647 [2024-10-17 20:17:23.175164] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:37.647 [2024-10-17 20:17:23.175305] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:37.647 20:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.647 20:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:37.647 20:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:37.647 20:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:37.647 20:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:37.647 20:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:37.647 20:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:37.647 20:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:37.647 20:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:37.647 20:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:20:37.647 20:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:37.647 20:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:37.647 20:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:37.647 20:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.647 20:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:37.647 20:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.647 20:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:37.647 "name": "raid_bdev1", 00:20:37.647 "uuid": "8734f1a5-ecba-4893-b393-c05452f64899", 00:20:37.647 "strip_size_kb": 0, 00:20:37.647 "state": "online", 00:20:37.647 "raid_level": "raid1", 00:20:37.647 "superblock": true, 00:20:37.647 "num_base_bdevs": 2, 00:20:37.647 "num_base_bdevs_discovered": 2, 00:20:37.647 "num_base_bdevs_operational": 2, 00:20:37.647 "base_bdevs_list": [ 00:20:37.647 { 00:20:37.647 "name": "BaseBdev1", 00:20:37.647 "uuid": "e20d7f5c-a66e-5853-93eb-fb899e1b5242", 00:20:37.647 "is_configured": true, 00:20:37.647 "data_offset": 256, 00:20:37.647 "data_size": 7936 00:20:37.647 }, 00:20:37.647 { 00:20:37.647 "name": "BaseBdev2", 00:20:37.647 "uuid": "2930ab35-93b0-50ca-a290-37e546403d32", 00:20:37.647 "is_configured": true, 00:20:37.647 "data_offset": 256, 00:20:37.647 "data_size": 7936 00:20:37.647 } 00:20:37.647 ] 00:20:37.647 }' 00:20:37.647 20:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:37.647 20:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:38.214 20:17:23 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:38.214 20:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:38.214 20:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.214 20:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:38.214 [2024-10-17 20:17:23.688678] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:38.215 20:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.215 20:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:20:38.215 20:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:38.215 20:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.215 20:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:38.215 20:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:38.215 20:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.215 20:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:20:38.215 20:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:20:38.215 20:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:20:38.215 20:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:20:38.215 20:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks 
/var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:20:38.215 20:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:38.215 20:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:38.215 20:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:38.215 20:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:38.215 20:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:38.215 20:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:20:38.215 20:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:38.215 20:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:38.215 20:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:38.473 [2024-10-17 20:17:24.068505] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:38.473 /dev/nbd0 00:20:38.473 20:17:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:38.473 20:17:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:38.473 20:17:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:20:38.473 20:17:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:20:38.473 20:17:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:38.473 20:17:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:38.473 
20:17:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:20:38.473 20:17:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:20:38.473 20:17:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:38.473 20:17:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:38.473 20:17:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:38.473 1+0 records in 00:20:38.473 1+0 records out 00:20:38.473 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000325084 s, 12.6 MB/s 00:20:38.473 20:17:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:38.732 20:17:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:20:38.732 20:17:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:38.732 20:17:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:20:38.732 20:17:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:20:38.732 20:17:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:38.732 20:17:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:38.732 20:17:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:20:38.732 20:17:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:20:38.732 20:17:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd 
if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:20:39.667 7936+0 records in 00:20:39.667 7936+0 records out 00:20:39.667 32505856 bytes (33 MB, 31 MiB) copied, 0.918233 s, 35.4 MB/s 00:20:39.667 20:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:20:39.667 20:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:39.667 20:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:39.667 20:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:39.667 20:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:20:39.667 20:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:39.667 20:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:39.667 20:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:39.667 [2024-10-17 20:17:25.304513] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:39.667 20:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:39.667 20:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:39.667 20:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:39.667 20:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:39.667 20:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:39.667 20:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # 
break 00:20:39.667 20:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:20:39.667 20:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:39.925 20:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.925 20:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:39.925 [2024-10-17 20:17:25.316878] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:39.925 20:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.925 20:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:39.925 20:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:39.925 20:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:39.925 20:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:39.925 20:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:39.925 20:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:39.925 20:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:39.925 20:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:39.925 20:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:39.925 20:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:39.925 20:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:39.925 20:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.925 20:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:39.925 20:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:39.925 20:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.925 20:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:39.925 "name": "raid_bdev1", 00:20:39.925 "uuid": "8734f1a5-ecba-4893-b393-c05452f64899", 00:20:39.925 "strip_size_kb": 0, 00:20:39.925 "state": "online", 00:20:39.925 "raid_level": "raid1", 00:20:39.925 "superblock": true, 00:20:39.925 "num_base_bdevs": 2, 00:20:39.925 "num_base_bdevs_discovered": 1, 00:20:39.925 "num_base_bdevs_operational": 1, 00:20:39.925 "base_bdevs_list": [ 00:20:39.925 { 00:20:39.925 "name": null, 00:20:39.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:39.925 "is_configured": false, 00:20:39.925 "data_offset": 0, 00:20:39.925 "data_size": 7936 00:20:39.925 }, 00:20:39.925 { 00:20:39.925 "name": "BaseBdev2", 00:20:39.925 "uuid": "2930ab35-93b0-50ca-a290-37e546403d32", 00:20:39.925 "is_configured": true, 00:20:39.925 "data_offset": 256, 00:20:39.925 "data_size": 7936 00:20:39.925 } 00:20:39.925 ] 00:20:39.925 }' 00:20:39.925 20:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:39.925 20:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:40.183 20:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:40.183 20:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 
00:20:40.183 20:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:40.183 [2024-10-17 20:17:25.821049] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:40.183 [2024-10-17 20:17:25.834888] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:20:40.441 20:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.441 20:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:40.441 [2024-10-17 20:17:25.837348] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:41.384 20:17:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:41.384 20:17:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:41.384 20:17:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:41.384 20:17:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:41.384 20:17:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:41.384 20:17:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:41.384 20:17:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:41.384 20:17:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.384 20:17:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:41.384 20:17:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.384 20:17:26 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:41.384 "name": "raid_bdev1", 00:20:41.384 "uuid": "8734f1a5-ecba-4893-b393-c05452f64899", 00:20:41.384 "strip_size_kb": 0, 00:20:41.384 "state": "online", 00:20:41.384 "raid_level": "raid1", 00:20:41.384 "superblock": true, 00:20:41.384 "num_base_bdevs": 2, 00:20:41.384 "num_base_bdevs_discovered": 2, 00:20:41.384 "num_base_bdevs_operational": 2, 00:20:41.384 "process": { 00:20:41.384 "type": "rebuild", 00:20:41.384 "target": "spare", 00:20:41.384 "progress": { 00:20:41.384 "blocks": 2560, 00:20:41.384 "percent": 32 00:20:41.384 } 00:20:41.384 }, 00:20:41.384 "base_bdevs_list": [ 00:20:41.384 { 00:20:41.384 "name": "spare", 00:20:41.384 "uuid": "f27c25a9-4068-5530-b4d8-6865924160ee", 00:20:41.384 "is_configured": true, 00:20:41.384 "data_offset": 256, 00:20:41.384 "data_size": 7936 00:20:41.384 }, 00:20:41.384 { 00:20:41.384 "name": "BaseBdev2", 00:20:41.384 "uuid": "2930ab35-93b0-50ca-a290-37e546403d32", 00:20:41.384 "is_configured": true, 00:20:41.384 "data_offset": 256, 00:20:41.384 "data_size": 7936 00:20:41.384 } 00:20:41.384 ] 00:20:41.384 }' 00:20:41.384 20:17:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:41.384 20:17:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:41.384 20:17:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:41.384 20:17:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:41.384 20:17:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:41.384 20:17:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.384 20:17:27 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:20:41.384 [2024-10-17 20:17:27.019625] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:41.642 [2024-10-17 20:17:27.047124] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:41.642 [2024-10-17 20:17:27.047220] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:41.642 [2024-10-17 20:17:27.047244] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:41.642 [2024-10-17 20:17:27.047262] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:41.642 20:17:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.642 20:17:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:41.642 20:17:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:41.642 20:17:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:41.642 20:17:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:41.642 20:17:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:41.642 20:17:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:41.642 20:17:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:41.642 20:17:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:41.642 20:17:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:41.642 20:17:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:20:41.642 20:17:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:41.642 20:17:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:41.642 20:17:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.642 20:17:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:41.642 20:17:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.642 20:17:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:41.642 "name": "raid_bdev1", 00:20:41.642 "uuid": "8734f1a5-ecba-4893-b393-c05452f64899", 00:20:41.642 "strip_size_kb": 0, 00:20:41.642 "state": "online", 00:20:41.642 "raid_level": "raid1", 00:20:41.642 "superblock": true, 00:20:41.642 "num_base_bdevs": 2, 00:20:41.642 "num_base_bdevs_discovered": 1, 00:20:41.642 "num_base_bdevs_operational": 1, 00:20:41.643 "base_bdevs_list": [ 00:20:41.643 { 00:20:41.643 "name": null, 00:20:41.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:41.643 "is_configured": false, 00:20:41.643 "data_offset": 0, 00:20:41.643 "data_size": 7936 00:20:41.643 }, 00:20:41.643 { 00:20:41.643 "name": "BaseBdev2", 00:20:41.643 "uuid": "2930ab35-93b0-50ca-a290-37e546403d32", 00:20:41.643 "is_configured": true, 00:20:41.643 "data_offset": 256, 00:20:41.643 "data_size": 7936 00:20:41.643 } 00:20:41.643 ] 00:20:41.643 }' 00:20:41.643 20:17:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:41.643 20:17:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:42.209 20:17:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:42.209 20:17:27 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:42.209 20:17:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:42.209 20:17:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:42.209 20:17:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:42.209 20:17:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:42.209 20:17:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:42.209 20:17:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.209 20:17:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:42.209 20:17:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.209 20:17:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:42.209 "name": "raid_bdev1", 00:20:42.209 "uuid": "8734f1a5-ecba-4893-b393-c05452f64899", 00:20:42.209 "strip_size_kb": 0, 00:20:42.209 "state": "online", 00:20:42.209 "raid_level": "raid1", 00:20:42.209 "superblock": true, 00:20:42.209 "num_base_bdevs": 2, 00:20:42.209 "num_base_bdevs_discovered": 1, 00:20:42.209 "num_base_bdevs_operational": 1, 00:20:42.209 "base_bdevs_list": [ 00:20:42.209 { 00:20:42.209 "name": null, 00:20:42.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:42.209 "is_configured": false, 00:20:42.209 "data_offset": 0, 00:20:42.209 "data_size": 7936 00:20:42.209 }, 00:20:42.209 { 00:20:42.209 "name": "BaseBdev2", 00:20:42.209 "uuid": "2930ab35-93b0-50ca-a290-37e546403d32", 00:20:42.209 "is_configured": true, 00:20:42.209 "data_offset": 256, 00:20:42.209 "data_size": 7936 
00:20:42.209 } 00:20:42.209 ] 00:20:42.209 }' 00:20:42.209 20:17:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:42.209 20:17:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:42.209 20:17:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:42.209 20:17:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:42.209 20:17:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:42.209 20:17:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.209 20:17:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:42.209 [2024-10-17 20:17:27.721819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:42.209 [2024-10-17 20:17:27.734868] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:20:42.209 20:17:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.209 20:17:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:42.209 [2024-10-17 20:17:27.737377] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:43.145 20:17:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:43.145 20:17:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:43.145 20:17:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:43.145 20:17:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 
-- # local target=spare 00:20:43.145 20:17:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:43.145 20:17:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:43.145 20:17:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.145 20:17:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:43.145 20:17:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:43.145 20:17:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.145 20:17:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:43.145 "name": "raid_bdev1", 00:20:43.145 "uuid": "8734f1a5-ecba-4893-b393-c05452f64899", 00:20:43.145 "strip_size_kb": 0, 00:20:43.145 "state": "online", 00:20:43.145 "raid_level": "raid1", 00:20:43.145 "superblock": true, 00:20:43.145 "num_base_bdevs": 2, 00:20:43.145 "num_base_bdevs_discovered": 2, 00:20:43.145 "num_base_bdevs_operational": 2, 00:20:43.145 "process": { 00:20:43.145 "type": "rebuild", 00:20:43.145 "target": "spare", 00:20:43.145 "progress": { 00:20:43.145 "blocks": 2560, 00:20:43.145 "percent": 32 00:20:43.145 } 00:20:43.145 }, 00:20:43.145 "base_bdevs_list": [ 00:20:43.145 { 00:20:43.145 "name": "spare", 00:20:43.145 "uuid": "f27c25a9-4068-5530-b4d8-6865924160ee", 00:20:43.145 "is_configured": true, 00:20:43.145 "data_offset": 256, 00:20:43.145 "data_size": 7936 00:20:43.145 }, 00:20:43.145 { 00:20:43.145 "name": "BaseBdev2", 00:20:43.145 "uuid": "2930ab35-93b0-50ca-a290-37e546403d32", 00:20:43.145 "is_configured": true, 00:20:43.145 "data_offset": 256, 00:20:43.145 "data_size": 7936 00:20:43.145 } 00:20:43.145 ] 00:20:43.145 }' 00:20:43.403 20:17:28 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:43.403 20:17:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:43.403 20:17:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:43.403 20:17:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:43.403 20:17:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:20:43.403 20:17:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:20:43.403 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:20:43.403 20:17:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:20:43.403 20:17:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:20:43.403 20:17:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:20:43.403 20:17:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=763 00:20:43.403 20:17:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:43.403 20:17:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:43.403 20:17:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:43.403 20:17:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:43.403 20:17:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:43.403 20:17:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:43.403 
20:17:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:43.403 20:17:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:43.403 20:17:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.403 20:17:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:43.403 20:17:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.403 20:17:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:43.403 "name": "raid_bdev1", 00:20:43.403 "uuid": "8734f1a5-ecba-4893-b393-c05452f64899", 00:20:43.403 "strip_size_kb": 0, 00:20:43.403 "state": "online", 00:20:43.403 "raid_level": "raid1", 00:20:43.403 "superblock": true, 00:20:43.403 "num_base_bdevs": 2, 00:20:43.403 "num_base_bdevs_discovered": 2, 00:20:43.403 "num_base_bdevs_operational": 2, 00:20:43.403 "process": { 00:20:43.403 "type": "rebuild", 00:20:43.403 "target": "spare", 00:20:43.403 "progress": { 00:20:43.403 "blocks": 2816, 00:20:43.403 "percent": 35 00:20:43.403 } 00:20:43.403 }, 00:20:43.403 "base_bdevs_list": [ 00:20:43.403 { 00:20:43.404 "name": "spare", 00:20:43.404 "uuid": "f27c25a9-4068-5530-b4d8-6865924160ee", 00:20:43.404 "is_configured": true, 00:20:43.404 "data_offset": 256, 00:20:43.404 "data_size": 7936 00:20:43.404 }, 00:20:43.404 { 00:20:43.404 "name": "BaseBdev2", 00:20:43.404 "uuid": "2930ab35-93b0-50ca-a290-37e546403d32", 00:20:43.404 "is_configured": true, 00:20:43.404 "data_offset": 256, 00:20:43.404 "data_size": 7936 00:20:43.404 } 00:20:43.404 ] 00:20:43.404 }' 00:20:43.404 20:17:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:43.404 20:17:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:43.404 20:17:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:43.404 20:17:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:43.404 20:17:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:44.779 20:17:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:44.779 20:17:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:44.779 20:17:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:44.779 20:17:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:44.779 20:17:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:44.779 20:17:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:44.779 20:17:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:44.779 20:17:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:44.779 20:17:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.779 20:17:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:44.779 20:17:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.779 20:17:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:44.779 "name": "raid_bdev1", 00:20:44.779 "uuid": "8734f1a5-ecba-4893-b393-c05452f64899", 00:20:44.779 "strip_size_kb": 0, 00:20:44.779 
"state": "online", 00:20:44.779 "raid_level": "raid1", 00:20:44.779 "superblock": true, 00:20:44.779 "num_base_bdevs": 2, 00:20:44.779 "num_base_bdevs_discovered": 2, 00:20:44.779 "num_base_bdevs_operational": 2, 00:20:44.779 "process": { 00:20:44.779 "type": "rebuild", 00:20:44.779 "target": "spare", 00:20:44.779 "progress": { 00:20:44.779 "blocks": 5632, 00:20:44.779 "percent": 70 00:20:44.779 } 00:20:44.779 }, 00:20:44.779 "base_bdevs_list": [ 00:20:44.779 { 00:20:44.779 "name": "spare", 00:20:44.779 "uuid": "f27c25a9-4068-5530-b4d8-6865924160ee", 00:20:44.779 "is_configured": true, 00:20:44.779 "data_offset": 256, 00:20:44.779 "data_size": 7936 00:20:44.779 }, 00:20:44.779 { 00:20:44.779 "name": "BaseBdev2", 00:20:44.779 "uuid": "2930ab35-93b0-50ca-a290-37e546403d32", 00:20:44.779 "is_configured": true, 00:20:44.779 "data_offset": 256, 00:20:44.779 "data_size": 7936 00:20:44.779 } 00:20:44.779 ] 00:20:44.779 }' 00:20:44.779 20:17:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:44.779 20:17:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:44.779 20:17:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:44.779 20:17:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:44.779 20:17:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:45.345 [2024-10-17 20:17:30.860649] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:45.345 [2024-10-17 20:17:30.860764] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:45.345 [2024-10-17 20:17:30.860951] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:45.603 20:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:45.603 20:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:45.603 20:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:45.603 20:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:45.603 20:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:45.603 20:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:45.603 20:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:45.603 20:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.603 20:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:45.603 20:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:45.603 20:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.603 20:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:45.603 "name": "raid_bdev1", 00:20:45.603 "uuid": "8734f1a5-ecba-4893-b393-c05452f64899", 00:20:45.603 "strip_size_kb": 0, 00:20:45.603 "state": "online", 00:20:45.603 "raid_level": "raid1", 00:20:45.603 "superblock": true, 00:20:45.603 "num_base_bdevs": 2, 00:20:45.603 "num_base_bdevs_discovered": 2, 00:20:45.603 "num_base_bdevs_operational": 2, 00:20:45.603 "base_bdevs_list": [ 00:20:45.603 { 00:20:45.603 "name": "spare", 00:20:45.603 "uuid": "f27c25a9-4068-5530-b4d8-6865924160ee", 00:20:45.603 "is_configured": true, 00:20:45.603 "data_offset": 256, 00:20:45.603 "data_size": 7936 
00:20:45.603 }, 00:20:45.603 { 00:20:45.603 "name": "BaseBdev2", 00:20:45.603 "uuid": "2930ab35-93b0-50ca-a290-37e546403d32", 00:20:45.603 "is_configured": true, 00:20:45.603 "data_offset": 256, 00:20:45.603 "data_size": 7936 00:20:45.603 } 00:20:45.603 ] 00:20:45.603 }' 00:20:45.603 20:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:45.863 20:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:45.863 20:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:45.863 20:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:45.863 20:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:20:45.863 20:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:45.863 20:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:45.863 20:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:45.863 20:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:45.863 20:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:45.863 20:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:45.863 20:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:45.863 20:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.863 20:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:45.863 
20:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.863 20:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:45.863 "name": "raid_bdev1", 00:20:45.863 "uuid": "8734f1a5-ecba-4893-b393-c05452f64899", 00:20:45.863 "strip_size_kb": 0, 00:20:45.863 "state": "online", 00:20:45.863 "raid_level": "raid1", 00:20:45.863 "superblock": true, 00:20:45.863 "num_base_bdevs": 2, 00:20:45.863 "num_base_bdevs_discovered": 2, 00:20:45.863 "num_base_bdevs_operational": 2, 00:20:45.863 "base_bdevs_list": [ 00:20:45.863 { 00:20:45.863 "name": "spare", 00:20:45.863 "uuid": "f27c25a9-4068-5530-b4d8-6865924160ee", 00:20:45.863 "is_configured": true, 00:20:45.863 "data_offset": 256, 00:20:45.863 "data_size": 7936 00:20:45.863 }, 00:20:45.863 { 00:20:45.863 "name": "BaseBdev2", 00:20:45.863 "uuid": "2930ab35-93b0-50ca-a290-37e546403d32", 00:20:45.863 "is_configured": true, 00:20:45.863 "data_offset": 256, 00:20:45.863 "data_size": 7936 00:20:45.863 } 00:20:45.863 ] 00:20:45.863 }' 00:20:45.863 20:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:45.863 20:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:45.863 20:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:45.863 20:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:45.863 20:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:45.863 20:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:45.863 20:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:45.863 20:17:31 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:45.863 20:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:45.863 20:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:45.863 20:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:45.863 20:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:45.863 20:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:45.863 20:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:45.863 20:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:45.863 20:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.863 20:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:45.863 20:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:46.128 20:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.128 20:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:46.128 "name": "raid_bdev1", 00:20:46.128 "uuid": "8734f1a5-ecba-4893-b393-c05452f64899", 00:20:46.128 "strip_size_kb": 0, 00:20:46.128 "state": "online", 00:20:46.128 "raid_level": "raid1", 00:20:46.128 "superblock": true, 00:20:46.128 "num_base_bdevs": 2, 00:20:46.128 "num_base_bdevs_discovered": 2, 00:20:46.128 "num_base_bdevs_operational": 2, 00:20:46.128 "base_bdevs_list": [ 00:20:46.128 { 00:20:46.128 "name": "spare", 00:20:46.128 "uuid": 
"f27c25a9-4068-5530-b4d8-6865924160ee", 00:20:46.128 "is_configured": true, 00:20:46.128 "data_offset": 256, 00:20:46.128 "data_size": 7936 00:20:46.128 }, 00:20:46.128 { 00:20:46.128 "name": "BaseBdev2", 00:20:46.128 "uuid": "2930ab35-93b0-50ca-a290-37e546403d32", 00:20:46.128 "is_configured": true, 00:20:46.128 "data_offset": 256, 00:20:46.128 "data_size": 7936 00:20:46.128 } 00:20:46.128 ] 00:20:46.128 }' 00:20:46.128 20:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:46.128 20:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:46.414 20:17:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:46.414 20:17:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.414 20:17:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:46.414 [2024-10-17 20:17:32.011467] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:46.414 [2024-10-17 20:17:32.011508] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:46.414 [2024-10-17 20:17:32.011608] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:46.414 [2024-10-17 20:17:32.011706] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:46.414 [2024-10-17 20:17:32.011723] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:46.415 20:17:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.415 20:17:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:46.415 20:17:32 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.415 20:17:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:20:46.415 20:17:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:46.415 20:17:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.672 20:17:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:46.672 20:17:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:20:46.672 20:17:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:20:46.672 20:17:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:46.672 20:17:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:46.672 20:17:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:20:46.672 20:17:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:46.672 20:17:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:46.672 20:17:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:46.672 20:17:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:20:46.672 20:17:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:46.672 20:17:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:46.672 20:17:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 
/dev/nbd0 00:20:46.930 /dev/nbd0 00:20:46.930 20:17:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:46.930 20:17:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:46.930 20:17:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:20:46.930 20:17:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:20:46.930 20:17:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:46.930 20:17:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:46.930 20:17:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:20:46.930 20:17:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:20:46.930 20:17:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:46.930 20:17:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:46.930 20:17:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:46.930 1+0 records in 00:20:46.930 1+0 records out 00:20:46.930 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000493379 s, 8.3 MB/s 00:20:46.930 20:17:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:46.930 20:17:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:20:46.930 20:17:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:46.930 20:17:32 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:20:46.930 20:17:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:20:46.930 20:17:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:46.930 20:17:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:46.930 20:17:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:20:47.189 /dev/nbd1 00:20:47.189 20:17:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:47.189 20:17:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:47.189 20:17:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:20:47.189 20:17:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:20:47.189 20:17:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:47.189 20:17:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:47.189 20:17:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:20:47.189 20:17:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:20:47.189 20:17:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:47.189 20:17:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:47.189 20:17:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:20:47.189 1+0 records in 00:20:47.189 1+0 records out 00:20:47.189 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000458801 s, 8.9 MB/s 00:20:47.189 20:17:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:47.189 20:17:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:20:47.189 20:17:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:47.189 20:17:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:20:47.189 20:17:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:20:47.189 20:17:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:47.189 20:17:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:47.189 20:17:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:20:47.447 20:17:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:20:47.447 20:17:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:47.447 20:17:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:47.447 20:17:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:47.447 20:17:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:20:47.447 20:17:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:47.447 20:17:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:47.705 20:17:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:47.705 20:17:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:47.705 20:17:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:47.705 20:17:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:47.705 20:17:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:47.705 20:17:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:47.705 20:17:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:20:47.705 20:17:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:20:47.705 20:17:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:47.705 20:17:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:20:47.964 20:17:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:47.964 20:17:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:47.964 20:17:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:47.964 20:17:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:47.964 20:17:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:47.964 20:17:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:47.964 
20:17:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:20:47.964 20:17:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:20:47.964 20:17:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:20:47.964 20:17:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:20:47.964 20:17:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.964 20:17:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:47.964 20:17:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.964 20:17:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:47.964 20:17:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.964 20:17:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:47.964 [2024-10-17 20:17:33.505832] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:47.964 [2024-10-17 20:17:33.505898] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:47.964 [2024-10-17 20:17:33.505933] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:20:47.964 [2024-10-17 20:17:33.505948] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:47.964 [2024-10-17 20:17:33.508569] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:47.964 [2024-10-17 20:17:33.508615] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:47.964 [2024-10-17 20:17:33.508701] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev spare 00:20:47.964 [2024-10-17 20:17:33.508777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:47.964 [2024-10-17 20:17:33.508952] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:47.964 spare 00:20:47.964 20:17:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.964 20:17:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:20:47.964 20:17:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.964 20:17:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:47.964 [2024-10-17 20:17:33.609099] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:20:47.964 [2024-10-17 20:17:33.609158] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:47.964 [2024-10-17 20:17:33.609310] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:20:47.964 [2024-10-17 20:17:33.609515] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:20:47.965 [2024-10-17 20:17:33.609531] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:20:47.965 [2024-10-17 20:17:33.609701] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:47.965 20:17:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.965 20:17:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:47.965 20:17:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:47.965 20:17:33 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:47.965 20:17:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:47.965 20:17:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:47.965 20:17:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:47.965 20:17:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:47.965 20:17:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:47.965 20:17:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:47.965 20:17:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:48.224 20:17:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:48.224 20:17:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.224 20:17:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:48.224 20:17:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:48.224 20:17:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.224 20:17:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:48.224 "name": "raid_bdev1", 00:20:48.224 "uuid": "8734f1a5-ecba-4893-b393-c05452f64899", 00:20:48.224 "strip_size_kb": 0, 00:20:48.224 "state": "online", 00:20:48.224 "raid_level": "raid1", 00:20:48.224 "superblock": true, 00:20:48.224 "num_base_bdevs": 2, 00:20:48.224 "num_base_bdevs_discovered": 2, 00:20:48.224 "num_base_bdevs_operational": 2, 00:20:48.224 "base_bdevs_list": [ 
00:20:48.224 { 00:20:48.224 "name": "spare", 00:20:48.224 "uuid": "f27c25a9-4068-5530-b4d8-6865924160ee", 00:20:48.224 "is_configured": true, 00:20:48.224 "data_offset": 256, 00:20:48.224 "data_size": 7936 00:20:48.224 }, 00:20:48.224 { 00:20:48.224 "name": "BaseBdev2", 00:20:48.224 "uuid": "2930ab35-93b0-50ca-a290-37e546403d32", 00:20:48.224 "is_configured": true, 00:20:48.224 "data_offset": 256, 00:20:48.224 "data_size": 7936 00:20:48.224 } 00:20:48.224 ] 00:20:48.224 }' 00:20:48.224 20:17:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:48.224 20:17:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:48.482 20:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:48.482 20:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:48.482 20:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:48.482 20:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:48.482 20:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:48.482 20:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:48.482 20:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.482 20:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:48.482 20:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:48.741 20:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.741 20:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:48.741 "name": "raid_bdev1", 00:20:48.741 "uuid": "8734f1a5-ecba-4893-b393-c05452f64899", 00:20:48.741 "strip_size_kb": 0, 00:20:48.741 "state": "online", 00:20:48.741 "raid_level": "raid1", 00:20:48.741 "superblock": true, 00:20:48.741 "num_base_bdevs": 2, 00:20:48.741 "num_base_bdevs_discovered": 2, 00:20:48.741 "num_base_bdevs_operational": 2, 00:20:48.741 "base_bdevs_list": [ 00:20:48.741 { 00:20:48.741 "name": "spare", 00:20:48.741 "uuid": "f27c25a9-4068-5530-b4d8-6865924160ee", 00:20:48.741 "is_configured": true, 00:20:48.741 "data_offset": 256, 00:20:48.741 "data_size": 7936 00:20:48.741 }, 00:20:48.741 { 00:20:48.741 "name": "BaseBdev2", 00:20:48.741 "uuid": "2930ab35-93b0-50ca-a290-37e546403d32", 00:20:48.741 "is_configured": true, 00:20:48.741 "data_offset": 256, 00:20:48.741 "data_size": 7936 00:20:48.741 } 00:20:48.741 ] 00:20:48.741 }' 00:20:48.741 20:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:48.741 20:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:48.741 20:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:48.741 20:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:48.741 20:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:48.741 20:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:48.741 20:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.741 20:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:48.741 20:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:20:48.741 20:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:20:48.741 20:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:48.741 20:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.741 20:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:48.741 [2024-10-17 20:17:34.330116] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:48.741 20:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.741 20:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:48.741 20:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:48.741 20:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:48.741 20:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:48.741 20:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:48.741 20:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:48.741 20:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:48.741 20:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:48.741 20:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:48.741 20:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:48.741 20:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:48.741 20:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:48.741 20:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.741 20:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:48.741 20:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.741 20:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:48.741 "name": "raid_bdev1", 00:20:48.741 "uuid": "8734f1a5-ecba-4893-b393-c05452f64899", 00:20:48.741 "strip_size_kb": 0, 00:20:48.741 "state": "online", 00:20:48.741 "raid_level": "raid1", 00:20:48.741 "superblock": true, 00:20:48.741 "num_base_bdevs": 2, 00:20:48.741 "num_base_bdevs_discovered": 1, 00:20:48.741 "num_base_bdevs_operational": 1, 00:20:48.741 "base_bdevs_list": [ 00:20:48.741 { 00:20:48.741 "name": null, 00:20:48.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:48.741 "is_configured": false, 00:20:48.741 "data_offset": 0, 00:20:48.741 "data_size": 7936 00:20:48.741 }, 00:20:48.741 { 00:20:48.741 "name": "BaseBdev2", 00:20:48.741 "uuid": "2930ab35-93b0-50ca-a290-37e546403d32", 00:20:48.741 "is_configured": true, 00:20:48.741 "data_offset": 256, 00:20:48.741 "data_size": 7936 00:20:48.741 } 00:20:48.741 ] 00:20:48.741 }' 00:20:48.741 20:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:48.741 20:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:49.309 20:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:49.309 20:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 
00:20:49.309 20:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:49.309 [2024-10-17 20:17:34.838300] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:49.309 [2024-10-17 20:17:34.838695] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:49.309 [2024-10-17 20:17:34.838730] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:20:49.309 [2024-10-17 20:17:34.838787] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:49.309 [2024-10-17 20:17:34.851506] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:20:49.309 20:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.309 20:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:20:49.309 [2024-10-17 20:17:34.854052] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:50.245 20:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:50.245 20:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:50.245 20:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:50.245 20:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:50.245 20:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:50.245 20:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:50.245 20:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.245 20:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:50.245 20:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:50.245 20:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.508 20:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:50.508 "name": "raid_bdev1", 00:20:50.508 "uuid": "8734f1a5-ecba-4893-b393-c05452f64899", 00:20:50.508 "strip_size_kb": 0, 00:20:50.508 "state": "online", 00:20:50.508 "raid_level": "raid1", 00:20:50.508 "superblock": true, 00:20:50.508 "num_base_bdevs": 2, 00:20:50.508 "num_base_bdevs_discovered": 2, 00:20:50.508 "num_base_bdevs_operational": 2, 00:20:50.508 "process": { 00:20:50.508 "type": "rebuild", 00:20:50.508 "target": "spare", 00:20:50.508 "progress": { 00:20:50.508 "blocks": 2560, 00:20:50.508 "percent": 32 00:20:50.508 } 00:20:50.508 }, 00:20:50.508 "base_bdevs_list": [ 00:20:50.508 { 00:20:50.508 "name": "spare", 00:20:50.508 "uuid": "f27c25a9-4068-5530-b4d8-6865924160ee", 00:20:50.508 "is_configured": true, 00:20:50.508 "data_offset": 256, 00:20:50.508 "data_size": 7936 00:20:50.508 }, 00:20:50.508 { 00:20:50.508 "name": "BaseBdev2", 00:20:50.508 "uuid": "2930ab35-93b0-50ca-a290-37e546403d32", 00:20:50.508 "is_configured": true, 00:20:50.508 "data_offset": 256, 00:20:50.508 "data_size": 7936 00:20:50.508 } 00:20:50.508 ] 00:20:50.508 }' 00:20:50.508 20:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:50.508 20:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:50.508 20:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:50.508 
20:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:50.508 20:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:20:50.508 20:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.508 20:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:50.508 [2024-10-17 20:17:36.019782] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:50.508 [2024-10-17 20:17:36.063144] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:50.508 [2024-10-17 20:17:36.063258] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:50.508 [2024-10-17 20:17:36.063281] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:50.508 [2024-10-17 20:17:36.063310] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:50.508 20:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.508 20:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:50.508 20:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:50.508 20:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:50.508 20:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:50.508 20:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:50.508 20:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:50.508 20:17:36 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:50.508 20:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:50.508 20:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:50.508 20:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:50.508 20:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:50.508 20:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:50.508 20:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.508 20:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:50.508 20:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.508 20:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:50.508 "name": "raid_bdev1", 00:20:50.508 "uuid": "8734f1a5-ecba-4893-b393-c05452f64899", 00:20:50.508 "strip_size_kb": 0, 00:20:50.508 "state": "online", 00:20:50.508 "raid_level": "raid1", 00:20:50.508 "superblock": true, 00:20:50.508 "num_base_bdevs": 2, 00:20:50.508 "num_base_bdevs_discovered": 1, 00:20:50.509 "num_base_bdevs_operational": 1, 00:20:50.509 "base_bdevs_list": [ 00:20:50.509 { 00:20:50.509 "name": null, 00:20:50.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:50.509 "is_configured": false, 00:20:50.509 "data_offset": 0, 00:20:50.509 "data_size": 7936 00:20:50.509 }, 00:20:50.509 { 00:20:50.509 "name": "BaseBdev2", 00:20:50.509 "uuid": "2930ab35-93b0-50ca-a290-37e546403d32", 00:20:50.509 "is_configured": true, 00:20:50.509 "data_offset": 256, 00:20:50.509 "data_size": 7936 00:20:50.509 } 
00:20:50.509 ] 00:20:50.509 }' 00:20:50.509 20:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:50.509 20:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:51.094 20:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:51.094 20:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.094 20:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:51.094 [2024-10-17 20:17:36.605611] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:51.094 [2024-10-17 20:17:36.605830] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:51.094 [2024-10-17 20:17:36.605878] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:20:51.094 [2024-10-17 20:17:36.605899] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:51.094 [2024-10-17 20:17:36.606242] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:51.094 [2024-10-17 20:17:36.606274] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:51.094 [2024-10-17 20:17:36.606358] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:51.094 [2024-10-17 20:17:36.606381] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:51.094 [2024-10-17 20:17:36.606396] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:20:51.094 [2024-10-17 20:17:36.606437] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:51.094 [2024-10-17 20:17:36.619328] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:20:51.094 spare 00:20:51.094 20:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.094 20:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:20:51.094 [2024-10-17 20:17:36.621801] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:52.031 20:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:52.031 20:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:52.031 20:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:52.031 20:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:52.031 20:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:52.031 20:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:52.031 20:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:52.031 20:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.031 20:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:52.031 20:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.290 20:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:52.290 "name": 
"raid_bdev1", 00:20:52.290 "uuid": "8734f1a5-ecba-4893-b393-c05452f64899", 00:20:52.290 "strip_size_kb": 0, 00:20:52.290 "state": "online", 00:20:52.290 "raid_level": "raid1", 00:20:52.290 "superblock": true, 00:20:52.290 "num_base_bdevs": 2, 00:20:52.290 "num_base_bdevs_discovered": 2, 00:20:52.290 "num_base_bdevs_operational": 2, 00:20:52.290 "process": { 00:20:52.290 "type": "rebuild", 00:20:52.290 "target": "spare", 00:20:52.290 "progress": { 00:20:52.290 "blocks": 2560, 00:20:52.290 "percent": 32 00:20:52.290 } 00:20:52.290 }, 00:20:52.290 "base_bdevs_list": [ 00:20:52.290 { 00:20:52.290 "name": "spare", 00:20:52.290 "uuid": "f27c25a9-4068-5530-b4d8-6865924160ee", 00:20:52.290 "is_configured": true, 00:20:52.290 "data_offset": 256, 00:20:52.290 "data_size": 7936 00:20:52.290 }, 00:20:52.290 { 00:20:52.290 "name": "BaseBdev2", 00:20:52.290 "uuid": "2930ab35-93b0-50ca-a290-37e546403d32", 00:20:52.290 "is_configured": true, 00:20:52.290 "data_offset": 256, 00:20:52.290 "data_size": 7936 00:20:52.290 } 00:20:52.290 ] 00:20:52.290 }' 00:20:52.290 20:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:52.290 20:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:52.290 20:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:52.290 20:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:52.290 20:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:20:52.290 20:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.290 20:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:52.290 [2024-10-17 20:17:37.791753] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:20:52.290 [2024-10-17 20:17:37.831273] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:52.290 [2024-10-17 20:17:37.831685] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:52.290 [2024-10-17 20:17:37.831722] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:52.290 [2024-10-17 20:17:37.831735] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:52.290 20:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.290 20:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:52.290 20:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:52.290 20:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:52.290 20:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:52.290 20:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:52.290 20:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:52.290 20:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:52.290 20:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:52.290 20:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:52.290 20:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:52.290 20:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:20:52.290 20:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:52.290 20:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.290 20:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:52.290 20:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.290 20:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:52.290 "name": "raid_bdev1", 00:20:52.290 "uuid": "8734f1a5-ecba-4893-b393-c05452f64899", 00:20:52.290 "strip_size_kb": 0, 00:20:52.290 "state": "online", 00:20:52.290 "raid_level": "raid1", 00:20:52.290 "superblock": true, 00:20:52.290 "num_base_bdevs": 2, 00:20:52.290 "num_base_bdevs_discovered": 1, 00:20:52.290 "num_base_bdevs_operational": 1, 00:20:52.290 "base_bdevs_list": [ 00:20:52.290 { 00:20:52.290 "name": null, 00:20:52.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:52.290 "is_configured": false, 00:20:52.290 "data_offset": 0, 00:20:52.290 "data_size": 7936 00:20:52.290 }, 00:20:52.290 { 00:20:52.290 "name": "BaseBdev2", 00:20:52.290 "uuid": "2930ab35-93b0-50ca-a290-37e546403d32", 00:20:52.290 "is_configured": true, 00:20:52.291 "data_offset": 256, 00:20:52.291 "data_size": 7936 00:20:52.291 } 00:20:52.291 ] 00:20:52.291 }' 00:20:52.291 20:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:52.291 20:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:52.858 20:17:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:52.858 20:17:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:52.858 20:17:38 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:52.858 20:17:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:52.858 20:17:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:52.858 20:17:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:52.858 20:17:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.858 20:17:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:52.858 20:17:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:52.858 20:17:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.858 20:17:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:52.858 "name": "raid_bdev1", 00:20:52.858 "uuid": "8734f1a5-ecba-4893-b393-c05452f64899", 00:20:52.858 "strip_size_kb": 0, 00:20:52.858 "state": "online", 00:20:52.858 "raid_level": "raid1", 00:20:52.858 "superblock": true, 00:20:52.858 "num_base_bdevs": 2, 00:20:52.858 "num_base_bdevs_discovered": 1, 00:20:52.858 "num_base_bdevs_operational": 1, 00:20:52.858 "base_bdevs_list": [ 00:20:52.858 { 00:20:52.858 "name": null, 00:20:52.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:52.858 "is_configured": false, 00:20:52.858 "data_offset": 0, 00:20:52.858 "data_size": 7936 00:20:52.858 }, 00:20:52.858 { 00:20:52.858 "name": "BaseBdev2", 00:20:52.858 "uuid": "2930ab35-93b0-50ca-a290-37e546403d32", 00:20:52.858 "is_configured": true, 00:20:52.858 "data_offset": 256, 00:20:52.858 "data_size": 7936 00:20:52.858 } 00:20:52.858 ] 00:20:52.858 }' 00:20:52.858 20:17:38 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:52.858 20:17:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:52.858 20:17:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:53.117 20:17:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:53.117 20:17:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:20:53.117 20:17:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.117 20:17:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:53.117 20:17:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.117 20:17:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:53.117 20:17:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.117 20:17:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:53.117 [2024-10-17 20:17:38.530263] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:53.117 [2024-10-17 20:17:38.530459] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:53.117 [2024-10-17 20:17:38.530509] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:20:53.117 [2024-10-17 20:17:38.530526] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:53.117 [2024-10-17 20:17:38.530795] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:53.117 [2024-10-17 20:17:38.530817] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:20:53.117 [2024-10-17 20:17:38.530887] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:53.117 [2024-10-17 20:17:38.530906] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:53.117 [2024-10-17 20:17:38.530927] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:53.117 [2024-10-17 20:17:38.530940] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:20:53.117 BaseBdev1 00:20:53.117 20:17:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.117 20:17:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:20:54.053 20:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:54.053 20:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:54.053 20:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:54.053 20:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:54.053 20:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:54.053 20:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:54.053 20:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:54.053 20:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:54.053 20:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:20:54.053 20:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:54.053 20:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:54.053 20:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:54.053 20:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.053 20:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:54.053 20:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.053 20:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:54.053 "name": "raid_bdev1", 00:20:54.053 "uuid": "8734f1a5-ecba-4893-b393-c05452f64899", 00:20:54.053 "strip_size_kb": 0, 00:20:54.053 "state": "online", 00:20:54.053 "raid_level": "raid1", 00:20:54.053 "superblock": true, 00:20:54.053 "num_base_bdevs": 2, 00:20:54.053 "num_base_bdevs_discovered": 1, 00:20:54.053 "num_base_bdevs_operational": 1, 00:20:54.053 "base_bdevs_list": [ 00:20:54.053 { 00:20:54.053 "name": null, 00:20:54.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:54.053 "is_configured": false, 00:20:54.053 "data_offset": 0, 00:20:54.053 "data_size": 7936 00:20:54.053 }, 00:20:54.053 { 00:20:54.053 "name": "BaseBdev2", 00:20:54.053 "uuid": "2930ab35-93b0-50ca-a290-37e546403d32", 00:20:54.053 "is_configured": true, 00:20:54.053 "data_offset": 256, 00:20:54.053 "data_size": 7936 00:20:54.053 } 00:20:54.053 ] 00:20:54.053 }' 00:20:54.053 20:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:54.053 20:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:54.621 20:17:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:20:54.621 20:17:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:54.621 20:17:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:54.621 20:17:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:54.621 20:17:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:54.621 20:17:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:54.621 20:17:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.621 20:17:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:54.621 20:17:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:54.621 20:17:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.621 20:17:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:54.621 "name": "raid_bdev1", 00:20:54.621 "uuid": "8734f1a5-ecba-4893-b393-c05452f64899", 00:20:54.621 "strip_size_kb": 0, 00:20:54.621 "state": "online", 00:20:54.621 "raid_level": "raid1", 00:20:54.621 "superblock": true, 00:20:54.621 "num_base_bdevs": 2, 00:20:54.621 "num_base_bdevs_discovered": 1, 00:20:54.621 "num_base_bdevs_operational": 1, 00:20:54.621 "base_bdevs_list": [ 00:20:54.621 { 00:20:54.621 "name": null, 00:20:54.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:54.621 "is_configured": false, 00:20:54.621 "data_offset": 0, 00:20:54.621 "data_size": 7936 00:20:54.621 }, 00:20:54.621 { 00:20:54.621 "name": "BaseBdev2", 00:20:54.621 "uuid": "2930ab35-93b0-50ca-a290-37e546403d32", 00:20:54.621 "is_configured": 
true, 00:20:54.621 "data_offset": 256, 00:20:54.621 "data_size": 7936 00:20:54.621 } 00:20:54.621 ] 00:20:54.621 }' 00:20:54.621 20:17:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:54.621 20:17:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:54.621 20:17:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:54.621 20:17:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:54.621 20:17:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:54.621 20:17:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:20:54.621 20:17:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:54.621 20:17:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:54.621 20:17:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:54.621 20:17:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:54.621 20:17:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:54.621 20:17:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:54.621 20:17:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.621 20:17:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:54.621 [2024-10-17 20:17:40.214820] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:54.621 [2024-10-17 20:17:40.215028] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:54.621 [2024-10-17 20:17:40.215330] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:54.621 request: 00:20:54.621 { 00:20:54.621 "base_bdev": "BaseBdev1", 00:20:54.621 "raid_bdev": "raid_bdev1", 00:20:54.621 "method": "bdev_raid_add_base_bdev", 00:20:54.621 "req_id": 1 00:20:54.621 } 00:20:54.621 Got JSON-RPC error response 00:20:54.621 response: 00:20:54.621 { 00:20:54.621 "code": -22, 00:20:54.621 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:20:54.621 } 00:20:54.621 20:17:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:54.621 20:17:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # es=1 00:20:54.621 20:17:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:54.621 20:17:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:54.621 20:17:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:54.621 20:17:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:20:55.624 20:17:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:55.624 20:17:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:55.624 20:17:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:55.624 20:17:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:20:55.624 20:17:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:55.624 20:17:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:55.624 20:17:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:55.624 20:17:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:55.624 20:17:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:55.624 20:17:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:55.624 20:17:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:55.624 20:17:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.624 20:17:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:55.624 20:17:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:55.624 20:17:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.882 20:17:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:55.882 "name": "raid_bdev1", 00:20:55.882 "uuid": "8734f1a5-ecba-4893-b393-c05452f64899", 00:20:55.882 "strip_size_kb": 0, 00:20:55.882 "state": "online", 00:20:55.882 "raid_level": "raid1", 00:20:55.882 "superblock": true, 00:20:55.882 "num_base_bdevs": 2, 00:20:55.882 "num_base_bdevs_discovered": 1, 00:20:55.882 "num_base_bdevs_operational": 1, 00:20:55.882 "base_bdevs_list": [ 00:20:55.882 { 00:20:55.882 "name": null, 00:20:55.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:55.882 "is_configured": false, 00:20:55.882 
"data_offset": 0, 00:20:55.882 "data_size": 7936 00:20:55.882 }, 00:20:55.882 { 00:20:55.882 "name": "BaseBdev2", 00:20:55.882 "uuid": "2930ab35-93b0-50ca-a290-37e546403d32", 00:20:55.882 "is_configured": true, 00:20:55.882 "data_offset": 256, 00:20:55.882 "data_size": 7936 00:20:55.882 } 00:20:55.882 ] 00:20:55.882 }' 00:20:55.882 20:17:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:55.882 20:17:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:56.141 20:17:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:56.141 20:17:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:56.141 20:17:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:56.141 20:17:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:56.141 20:17:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:56.141 20:17:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:56.141 20:17:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:56.141 20:17:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.141 20:17:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:56.141 20:17:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.141 20:17:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:56.141 "name": "raid_bdev1", 00:20:56.141 "uuid": "8734f1a5-ecba-4893-b393-c05452f64899", 00:20:56.141 
"strip_size_kb": 0, 00:20:56.141 "state": "online", 00:20:56.141 "raid_level": "raid1", 00:20:56.141 "superblock": true, 00:20:56.141 "num_base_bdevs": 2, 00:20:56.141 "num_base_bdevs_discovered": 1, 00:20:56.141 "num_base_bdevs_operational": 1, 00:20:56.141 "base_bdevs_list": [ 00:20:56.141 { 00:20:56.141 "name": null, 00:20:56.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:56.141 "is_configured": false, 00:20:56.141 "data_offset": 0, 00:20:56.141 "data_size": 7936 00:20:56.141 }, 00:20:56.141 { 00:20:56.141 "name": "BaseBdev2", 00:20:56.141 "uuid": "2930ab35-93b0-50ca-a290-37e546403d32", 00:20:56.141 "is_configured": true, 00:20:56.141 "data_offset": 256, 00:20:56.141 "data_size": 7936 00:20:56.142 } 00:20:56.142 ] 00:20:56.142 }' 00:20:56.142 20:17:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:56.401 20:17:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:56.401 20:17:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:56.401 20:17:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:56.401 20:17:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 88102 00:20:56.401 20:17:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 88102 ']' 00:20:56.401 20:17:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 88102 00:20:56.401 20:17:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:20:56.401 20:17:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:56.401 20:17:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88102 00:20:56.401 20:17:41 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:56.401 20:17:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:56.401 killing process with pid 88102 00:20:56.401 20:17:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88102' 00:20:56.401 20:17:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 88102 00:20:56.401 Received shutdown signal, test time was about 60.000000 seconds 00:20:56.401 00:20:56.401 Latency(us) 00:20:56.401 [2024-10-17T20:17:42.055Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:56.401 [2024-10-17T20:17:42.055Z] =================================================================================================================== 00:20:56.401 [2024-10-17T20:17:42.055Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:56.401 [2024-10-17 20:17:41.920894] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:56.401 20:17:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 88102 00:20:56.401 [2024-10-17 20:17:41.921081] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:56.401 [2024-10-17 20:17:41.921147] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:56.401 [2024-10-17 20:17:41.921166] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:20:56.660 [2024-10-17 20:17:42.219596] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:58.036 ************************************ 00:20:58.036 END TEST raid_rebuild_test_sb_md_separate 00:20:58.036 ************************************ 00:20:58.036 20:17:43 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@786 -- # return 0 00:20:58.036 00:20:58.036 real 0m21.381s 00:20:58.036 user 0m28.973s 00:20:58.036 sys 0m2.424s 00:20:58.036 20:17:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:58.036 20:17:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:58.036 20:17:43 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:20:58.036 20:17:43 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:20:58.036 20:17:43 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:20:58.036 20:17:43 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:58.036 20:17:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:58.036 ************************************ 00:20:58.036 START TEST raid_state_function_test_sb_md_interleaved 00:20:58.036 ************************************ 00:20:58.036 20:17:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:20:58.036 20:17:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:20:58.037 20:17:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:20:58.037 20:17:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:20:58.037 20:17:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:20:58.037 20:17:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:20:58.037 20:17:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:58.037 20:17:43 bdev_raid.raid_state_function_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:20:58.037 20:17:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:58.037 20:17:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:58.037 20:17:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:20:58.037 20:17:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:58.037 20:17:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:58.037 20:17:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:58.037 20:17:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:20:58.037 20:17:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:20:58.037 20:17:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:20:58.037 20:17:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:20:58.037 20:17:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:20:58.037 20:17:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:20:58.037 20:17:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:20:58.037 20:17:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:20:58.037 20:17:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:20:58.037 20:17:43 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88810 00:20:58.037 20:17:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88810' 00:20:58.037 20:17:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:58.037 Process raid pid: 88810 00:20:58.037 20:17:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88810 00:20:58.037 20:17:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 88810 ']' 00:20:58.037 20:17:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:58.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:58.037 20:17:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:58.037 20:17:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:58.037 20:17:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:58.037 20:17:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:58.037 [2024-10-17 20:17:43.441180] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 
00:20:58.037 [2024-10-17 20:17:43.441586] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:58.037 [2024-10-17 20:17:43.606067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.296 [2024-10-17 20:17:43.740603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:58.554 [2024-10-17 20:17:43.950449] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:58.554 [2024-10-17 20:17:43.950490] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:58.814 20:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:58.814 20:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:20:58.814 20:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:58.814 20:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.814 20:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:58.814 [2024-10-17 20:17:44.416879] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:58.814 [2024-10-17 20:17:44.416964] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:58.814 [2024-10-17 20:17:44.416979] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:58.814 [2024-10-17 20:17:44.417019] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:58.814 20:17:44 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.814 20:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:58.814 20:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:58.814 20:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:58.814 20:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:58.814 20:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:58.814 20:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:58.814 20:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:58.814 20:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:58.814 20:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:58.814 20:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:58.814 20:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:58.814 20:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:58.814 20:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.814 20:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:58.814 20:17:44 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.073 20:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:59.073 "name": "Existed_Raid", 00:20:59.073 "uuid": "a234fa9c-6e47-4f7e-a493-93df9bbcab9a", 00:20:59.073 "strip_size_kb": 0, 00:20:59.073 "state": "configuring", 00:20:59.073 "raid_level": "raid1", 00:20:59.073 "superblock": true, 00:20:59.073 "num_base_bdevs": 2, 00:20:59.073 "num_base_bdevs_discovered": 0, 00:20:59.073 "num_base_bdevs_operational": 2, 00:20:59.073 "base_bdevs_list": [ 00:20:59.073 { 00:20:59.073 "name": "BaseBdev1", 00:20:59.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:59.073 "is_configured": false, 00:20:59.073 "data_offset": 0, 00:20:59.073 "data_size": 0 00:20:59.073 }, 00:20:59.073 { 00:20:59.073 "name": "BaseBdev2", 00:20:59.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:59.073 "is_configured": false, 00:20:59.073 "data_offset": 0, 00:20:59.073 "data_size": 0 00:20:59.073 } 00:20:59.073 ] 00:20:59.073 }' 00:20:59.073 20:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:59.073 20:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:59.332 20:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:59.332 20:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.332 20:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:59.332 [2024-10-17 20:17:44.925001] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:59.332 [2024-10-17 20:17:44.925072] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:20:59.332 20:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.332 20:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:59.332 20:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.332 20:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:59.332 [2024-10-17 20:17:44.933034] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:59.332 [2024-10-17 20:17:44.933090] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:59.332 [2024-10-17 20:17:44.933104] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:59.332 [2024-10-17 20:17:44.933123] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:59.332 20:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.332 20:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:20:59.332 20:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.332 20:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:59.332 [2024-10-17 20:17:44.978315] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:59.332 BaseBdev1 00:20:59.332 20:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.332 20:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:59.332 20:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:20:59.332 20:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:59.332 20:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:20:59.332 20:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:59.332 20:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:59.332 20:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:20:59.332 20:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.332 20:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:59.591 20:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.591 20:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:59.591 20:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.591 20:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:59.591 [ 00:20:59.591 { 00:20:59.591 "name": "BaseBdev1", 00:20:59.591 "aliases": [ 00:20:59.591 "6c91278a-bdfb-4491-8a1a-59a322201021" 00:20:59.591 ], 00:20:59.591 "product_name": "Malloc disk", 00:20:59.591 "block_size": 4128, 00:20:59.591 "num_blocks": 8192, 00:20:59.591 "uuid": "6c91278a-bdfb-4491-8a1a-59a322201021", 00:20:59.591 "md_size": 32, 00:20:59.591 
"md_interleave": true, 00:20:59.591 "dif_type": 0, 00:20:59.591 "assigned_rate_limits": { 00:20:59.591 "rw_ios_per_sec": 0, 00:20:59.591 "rw_mbytes_per_sec": 0, 00:20:59.591 "r_mbytes_per_sec": 0, 00:20:59.591 "w_mbytes_per_sec": 0 00:20:59.591 }, 00:20:59.591 "claimed": true, 00:20:59.591 "claim_type": "exclusive_write", 00:20:59.591 "zoned": false, 00:20:59.591 "supported_io_types": { 00:20:59.591 "read": true, 00:20:59.591 "write": true, 00:20:59.591 "unmap": true, 00:20:59.591 "flush": true, 00:20:59.591 "reset": true, 00:20:59.591 "nvme_admin": false, 00:20:59.591 "nvme_io": false, 00:20:59.591 "nvme_io_md": false, 00:20:59.591 "write_zeroes": true, 00:20:59.591 "zcopy": true, 00:20:59.591 "get_zone_info": false, 00:20:59.591 "zone_management": false, 00:20:59.591 "zone_append": false, 00:20:59.591 "compare": false, 00:20:59.591 "compare_and_write": false, 00:20:59.592 "abort": true, 00:20:59.592 "seek_hole": false, 00:20:59.592 "seek_data": false, 00:20:59.592 "copy": true, 00:20:59.592 "nvme_iov_md": false 00:20:59.592 }, 00:20:59.592 "memory_domains": [ 00:20:59.592 { 00:20:59.592 "dma_device_id": "system", 00:20:59.592 "dma_device_type": 1 00:20:59.592 }, 00:20:59.592 { 00:20:59.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:59.592 "dma_device_type": 2 00:20:59.592 } 00:20:59.592 ], 00:20:59.592 "driver_specific": {} 00:20:59.592 } 00:20:59.592 ] 00:20:59.592 20:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.592 20:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:20:59.592 20:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:59.592 20:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:59.592 20:17:45 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:59.592 20:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:59.592 20:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:59.592 20:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:59.592 20:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:59.592 20:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:59.592 20:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:59.592 20:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:59.592 20:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:59.592 20:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:59.592 20:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.592 20:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:59.592 20:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.592 20:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:59.592 "name": "Existed_Raid", 00:20:59.592 "uuid": "0c1c2274-7c33-44f1-874a-e69075a52402", 00:20:59.592 "strip_size_kb": 0, 00:20:59.592 "state": "configuring", 00:20:59.592 "raid_level": "raid1", 
00:20:59.592 "superblock": true, 00:20:59.592 "num_base_bdevs": 2, 00:20:59.592 "num_base_bdevs_discovered": 1, 00:20:59.592 "num_base_bdevs_operational": 2, 00:20:59.592 "base_bdevs_list": [ 00:20:59.592 { 00:20:59.592 "name": "BaseBdev1", 00:20:59.592 "uuid": "6c91278a-bdfb-4491-8a1a-59a322201021", 00:20:59.592 "is_configured": true, 00:20:59.592 "data_offset": 256, 00:20:59.592 "data_size": 7936 00:20:59.592 }, 00:20:59.592 { 00:20:59.592 "name": "BaseBdev2", 00:20:59.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:59.592 "is_configured": false, 00:20:59.592 "data_offset": 0, 00:20:59.592 "data_size": 0 00:20:59.592 } 00:20:59.592 ] 00:20:59.592 }' 00:20:59.592 20:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:59.592 20:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:59.850 20:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:59.850 20:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.850 20:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:00.109 [2024-10-17 20:17:45.506578] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:00.109 [2024-10-17 20:17:45.506650] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:21:00.109 20:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.109 20:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:00.109 20:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 
-- # xtrace_disable 00:21:00.109 20:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:00.109 [2024-10-17 20:17:45.514610] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:00.109 [2024-10-17 20:17:45.517151] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:00.109 [2024-10-17 20:17:45.517204] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:00.109 20:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.109 20:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:21:00.109 20:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:00.109 20:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:00.109 20:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:00.109 20:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:00.109 20:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:00.109 20:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:00.109 20:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:00.109 20:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:00.109 20:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:00.109 
20:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:00.109 20:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:00.109 20:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:00.109 20:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:00.109 20:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.109 20:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:00.109 20:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.109 20:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:00.109 "name": "Existed_Raid", 00:21:00.109 "uuid": "80acd086-6182-4338-8e25-011c39a18cf9", 00:21:00.109 "strip_size_kb": 0, 00:21:00.109 "state": "configuring", 00:21:00.109 "raid_level": "raid1", 00:21:00.109 "superblock": true, 00:21:00.109 "num_base_bdevs": 2, 00:21:00.110 "num_base_bdevs_discovered": 1, 00:21:00.110 "num_base_bdevs_operational": 2, 00:21:00.110 "base_bdevs_list": [ 00:21:00.110 { 00:21:00.110 "name": "BaseBdev1", 00:21:00.110 "uuid": "6c91278a-bdfb-4491-8a1a-59a322201021", 00:21:00.110 "is_configured": true, 00:21:00.110 "data_offset": 256, 00:21:00.110 "data_size": 7936 00:21:00.110 }, 00:21:00.110 { 00:21:00.110 "name": "BaseBdev2", 00:21:00.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:00.110 "is_configured": false, 00:21:00.110 "data_offset": 0, 00:21:00.110 "data_size": 0 00:21:00.110 } 00:21:00.110 ] 00:21:00.110 }' 00:21:00.110 20:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:21:00.110 20:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:00.677 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:21:00.677 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.677 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:00.677 [2024-10-17 20:17:46.065584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:00.677 [2024-10-17 20:17:46.065990] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:00.677 [2024-10-17 20:17:46.066029] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:21:00.677 [2024-10-17 20:17:46.066135] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:00.677 [2024-10-17 20:17:46.066244] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:00.677 [2024-10-17 20:17:46.066263] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:21:00.677 BaseBdev2 00:21:00.677 [2024-10-17 20:17:46.066369] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:00.677 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.677 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:21:00.677 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:21:00.677 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:21:00.677 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:21:00.677 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:00.677 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:00.677 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:21:00.677 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.677 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:00.677 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.677 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:00.677 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.677 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:00.677 [ 00:21:00.677 { 00:21:00.677 "name": "BaseBdev2", 00:21:00.677 "aliases": [ 00:21:00.677 "79684baf-e989-4c39-8505-cc948b62530b" 00:21:00.677 ], 00:21:00.677 "product_name": "Malloc disk", 00:21:00.677 "block_size": 4128, 00:21:00.677 "num_blocks": 8192, 00:21:00.677 "uuid": "79684baf-e989-4c39-8505-cc948b62530b", 00:21:00.678 "md_size": 32, 00:21:00.678 "md_interleave": true, 00:21:00.678 "dif_type": 0, 00:21:00.678 "assigned_rate_limits": { 00:21:00.678 "rw_ios_per_sec": 0, 00:21:00.678 "rw_mbytes_per_sec": 0, 00:21:00.678 "r_mbytes_per_sec": 0, 00:21:00.678 "w_mbytes_per_sec": 0 00:21:00.678 }, 00:21:00.678 "claimed": true, 00:21:00.678 "claim_type": "exclusive_write", 
00:21:00.678 "zoned": false, 00:21:00.678 "supported_io_types": { 00:21:00.678 "read": true, 00:21:00.678 "write": true, 00:21:00.678 "unmap": true, 00:21:00.678 "flush": true, 00:21:00.678 "reset": true, 00:21:00.678 "nvme_admin": false, 00:21:00.678 "nvme_io": false, 00:21:00.678 "nvme_io_md": false, 00:21:00.678 "write_zeroes": true, 00:21:00.678 "zcopy": true, 00:21:00.678 "get_zone_info": false, 00:21:00.678 "zone_management": false, 00:21:00.678 "zone_append": false, 00:21:00.678 "compare": false, 00:21:00.678 "compare_and_write": false, 00:21:00.678 "abort": true, 00:21:00.678 "seek_hole": false, 00:21:00.678 "seek_data": false, 00:21:00.678 "copy": true, 00:21:00.678 "nvme_iov_md": false 00:21:00.678 }, 00:21:00.678 "memory_domains": [ 00:21:00.678 { 00:21:00.678 "dma_device_id": "system", 00:21:00.678 "dma_device_type": 1 00:21:00.678 }, 00:21:00.678 { 00:21:00.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:00.678 "dma_device_type": 2 00:21:00.678 } 00:21:00.678 ], 00:21:00.678 "driver_specific": {} 00:21:00.678 } 00:21:00.678 ] 00:21:00.678 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.678 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:21:00.678 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:00.678 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:00.678 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:21:00.678 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:00.678 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:00.678 
20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:00.678 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:00.678 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:00.678 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:00.678 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:00.678 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:00.678 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:00.678 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:00.678 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.678 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:00.678 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:00.678 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.678 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:00.678 "name": "Existed_Raid", 00:21:00.678 "uuid": "80acd086-6182-4338-8e25-011c39a18cf9", 00:21:00.678 "strip_size_kb": 0, 00:21:00.678 "state": "online", 00:21:00.678 "raid_level": "raid1", 00:21:00.678 "superblock": true, 00:21:00.678 "num_base_bdevs": 2, 00:21:00.678 "num_base_bdevs_discovered": 2, 00:21:00.678 
"num_base_bdevs_operational": 2, 00:21:00.678 "base_bdevs_list": [ 00:21:00.678 { 00:21:00.678 "name": "BaseBdev1", 00:21:00.678 "uuid": "6c91278a-bdfb-4491-8a1a-59a322201021", 00:21:00.678 "is_configured": true, 00:21:00.678 "data_offset": 256, 00:21:00.678 "data_size": 7936 00:21:00.678 }, 00:21:00.678 { 00:21:00.678 "name": "BaseBdev2", 00:21:00.678 "uuid": "79684baf-e989-4c39-8505-cc948b62530b", 00:21:00.678 "is_configured": true, 00:21:00.678 "data_offset": 256, 00:21:00.678 "data_size": 7936 00:21:00.678 } 00:21:00.678 ] 00:21:00.678 }' 00:21:00.678 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:00.678 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:01.246 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:21:01.246 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:01.246 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:01.246 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:01.246 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:21:01.246 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:01.246 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:01.246 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:01.246 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.246 20:17:46 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:01.246 [2024-10-17 20:17:46.618200] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:01.246 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.246 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:01.246 "name": "Existed_Raid", 00:21:01.246 "aliases": [ 00:21:01.246 "80acd086-6182-4338-8e25-011c39a18cf9" 00:21:01.246 ], 00:21:01.246 "product_name": "Raid Volume", 00:21:01.246 "block_size": 4128, 00:21:01.246 "num_blocks": 7936, 00:21:01.246 "uuid": "80acd086-6182-4338-8e25-011c39a18cf9", 00:21:01.246 "md_size": 32, 00:21:01.246 "md_interleave": true, 00:21:01.246 "dif_type": 0, 00:21:01.246 "assigned_rate_limits": { 00:21:01.246 "rw_ios_per_sec": 0, 00:21:01.246 "rw_mbytes_per_sec": 0, 00:21:01.246 "r_mbytes_per_sec": 0, 00:21:01.246 "w_mbytes_per_sec": 0 00:21:01.246 }, 00:21:01.246 "claimed": false, 00:21:01.246 "zoned": false, 00:21:01.246 "supported_io_types": { 00:21:01.246 "read": true, 00:21:01.246 "write": true, 00:21:01.246 "unmap": false, 00:21:01.246 "flush": false, 00:21:01.246 "reset": true, 00:21:01.246 "nvme_admin": false, 00:21:01.246 "nvme_io": false, 00:21:01.246 "nvme_io_md": false, 00:21:01.246 "write_zeroes": true, 00:21:01.246 "zcopy": false, 00:21:01.246 "get_zone_info": false, 00:21:01.246 "zone_management": false, 00:21:01.246 "zone_append": false, 00:21:01.246 "compare": false, 00:21:01.246 "compare_and_write": false, 00:21:01.246 "abort": false, 00:21:01.246 "seek_hole": false, 00:21:01.246 "seek_data": false, 00:21:01.246 "copy": false, 00:21:01.246 "nvme_iov_md": false 00:21:01.246 }, 00:21:01.246 "memory_domains": [ 00:21:01.246 { 00:21:01.246 "dma_device_id": "system", 00:21:01.246 "dma_device_type": 1 00:21:01.246 }, 00:21:01.246 { 00:21:01.246 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:21:01.246 "dma_device_type": 2 00:21:01.246 }, 00:21:01.246 { 00:21:01.246 "dma_device_id": "system", 00:21:01.246 "dma_device_type": 1 00:21:01.246 }, 00:21:01.246 { 00:21:01.246 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:01.246 "dma_device_type": 2 00:21:01.246 } 00:21:01.246 ], 00:21:01.246 "driver_specific": { 00:21:01.246 "raid": { 00:21:01.246 "uuid": "80acd086-6182-4338-8e25-011c39a18cf9", 00:21:01.246 "strip_size_kb": 0, 00:21:01.246 "state": "online", 00:21:01.246 "raid_level": "raid1", 00:21:01.246 "superblock": true, 00:21:01.246 "num_base_bdevs": 2, 00:21:01.246 "num_base_bdevs_discovered": 2, 00:21:01.246 "num_base_bdevs_operational": 2, 00:21:01.246 "base_bdevs_list": [ 00:21:01.246 { 00:21:01.246 "name": "BaseBdev1", 00:21:01.246 "uuid": "6c91278a-bdfb-4491-8a1a-59a322201021", 00:21:01.246 "is_configured": true, 00:21:01.246 "data_offset": 256, 00:21:01.246 "data_size": 7936 00:21:01.246 }, 00:21:01.246 { 00:21:01.246 "name": "BaseBdev2", 00:21:01.246 "uuid": "79684baf-e989-4c39-8505-cc948b62530b", 00:21:01.246 "is_configured": true, 00:21:01.246 "data_offset": 256, 00:21:01.246 "data_size": 7936 00:21:01.246 } 00:21:01.246 ] 00:21:01.246 } 00:21:01.246 } 00:21:01.246 }' 00:21:01.246 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:01.246 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:21:01.246 BaseBdev2' 00:21:01.246 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:01.246 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:21:01.246 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:21:01.246 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:21:01.246 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.246 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:01.246 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:01.246 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.246 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:21:01.246 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:21:01.246 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:01.246 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:01.247 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:01.247 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.247 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:01.247 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.247 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:21:01.247 
20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:21:01.247 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:01.247 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.247 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:01.247 [2024-10-17 20:17:46.889985] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:01.505 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.505 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:21:01.505 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:21:01.505 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:01.505 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:21:01.505 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:21:01.505 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:21:01.505 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:01.505 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:01.505 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:01.505 20:17:46 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:01.505 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:01.505 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:01.505 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:01.505 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:01.505 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:01.505 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:01.505 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.505 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:01.505 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:01.505 20:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.505 20:17:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:01.505 "name": "Existed_Raid", 00:21:01.505 "uuid": "80acd086-6182-4338-8e25-011c39a18cf9", 00:21:01.505 "strip_size_kb": 0, 00:21:01.505 "state": "online", 00:21:01.505 "raid_level": "raid1", 00:21:01.505 "superblock": true, 00:21:01.505 "num_base_bdevs": 2, 00:21:01.505 "num_base_bdevs_discovered": 1, 00:21:01.505 "num_base_bdevs_operational": 1, 00:21:01.505 "base_bdevs_list": [ 00:21:01.505 { 00:21:01.505 "name": null, 00:21:01.505 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:21:01.505 "is_configured": false, 00:21:01.505 "data_offset": 0, 00:21:01.505 "data_size": 7936 00:21:01.505 }, 00:21:01.505 { 00:21:01.505 "name": "BaseBdev2", 00:21:01.505 "uuid": "79684baf-e989-4c39-8505-cc948b62530b", 00:21:01.506 "is_configured": true, 00:21:01.506 "data_offset": 256, 00:21:01.506 "data_size": 7936 00:21:01.506 } 00:21:01.506 ] 00:21:01.506 }' 00:21:01.506 20:17:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:01.506 20:17:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:02.086 20:17:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:21:02.086 20:17:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:02.086 20:17:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:02.086 20:17:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.086 20:17:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:02.086 20:17:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:02.086 20:17:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.086 20:17:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:02.086 20:17:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:02.086 20:17:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:21:02.086 20:17:47 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.086 20:17:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:02.086 [2024-10-17 20:17:47.566528] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:02.086 [2024-10-17 20:17:47.566855] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:02.086 [2024-10-17 20:17:47.655067] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:02.086 [2024-10-17 20:17:47.655383] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:02.086 [2024-10-17 20:17:47.655419] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:21:02.086 20:17:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.086 20:17:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:02.086 20:17:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:02.086 20:17:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:02.086 20:17:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:21:02.086 20:17:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.086 20:17:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:02.086 20:17:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.086 20:17:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:21:02.086 20:17:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:21:02.086 20:17:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:21:02.086 20:17:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88810 00:21:02.086 20:17:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 88810 ']' 00:21:02.086 20:17:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 88810 00:21:02.086 20:17:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:21:02.086 20:17:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:02.086 20:17:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88810 00:21:02.345 killing process with pid 88810 00:21:02.345 20:17:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:02.345 20:17:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:02.345 20:17:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88810' 00:21:02.345 20:17:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 88810 00:21:02.345 [2024-10-17 20:17:47.747703] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:02.345 20:17:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 88810 00:21:02.345 [2024-10-17 20:17:47.762975] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:03.279 
************************************ 00:21:03.279 END TEST raid_state_function_test_sb_md_interleaved 00:21:03.279 ************************************ 00:21:03.279 20:17:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:21:03.279 00:21:03.279 real 0m5.481s 00:21:03.279 user 0m8.259s 00:21:03.279 sys 0m0.813s 00:21:03.279 20:17:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:03.279 20:17:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:03.279 20:17:48 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:21:03.280 20:17:48 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:21:03.280 20:17:48 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:03.280 20:17:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:03.280 ************************************ 00:21:03.280 START TEST raid_superblock_test_md_interleaved 00:21:03.280 ************************************ 00:21:03.280 20:17:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:21:03.280 20:17:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:21:03.280 20:17:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:21:03.280 20:17:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:21:03.280 20:17:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:21:03.280 20:17:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:21:03.280 20:17:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:21:03.280 20:17:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:21:03.280 20:17:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:21:03.280 20:17:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:21:03.280 20:17:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:21:03.280 20:17:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:21:03.280 20:17:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:21:03.280 20:17:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:21:03.280 20:17:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:21:03.280 20:17:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:21:03.280 20:17:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=89062 00:21:03.280 20:17:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 89062 00:21:03.280 20:17:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:21:03.280 20:17:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 89062 ']' 00:21:03.280 20:17:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:03.280 20:17:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:03.280 20:17:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:03.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:03.280 20:17:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:03.280 20:17:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:03.538 [2024-10-17 20:17:48.972739] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 00:21:03.538 [2024-10-17 20:17:48.973234] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89062 ] 00:21:03.538 [2024-10-17 20:17:49.147621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:03.796 [2024-10-17 20:17:49.282161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:04.053 [2024-10-17 20:17:49.486948] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:04.053 [2024-10-17 20:17:49.487021] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:04.312 20:17:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:04.312 20:17:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:21:04.312 20:17:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:21:04.312 20:17:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:04.312 20:17:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:21:04.312 20:17:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:21:04.312 20:17:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:04.312 20:17:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:04.312 20:17:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:04.312 20:17:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:04.312 20:17:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:21:04.312 20:17:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.312 20:17:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:04.571 malloc1 00:21:04.571 20:17:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.571 20:17:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:04.571 20:17:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.571 20:17:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:04.571 [2024-10-17 20:17:49.994513] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:04.571 [2024-10-17 20:17:49.994755] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:04.571 [2024-10-17 20:17:49.994864] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:04.571 [2024-10-17 20:17:49.995103] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:04.571 
[2024-10-17 20:17:49.997952] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:04.571 [2024-10-17 20:17:49.998153] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:04.571 pt1 00:21:04.571 20:17:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.571 20:17:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:04.571 20:17:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:04.571 20:17:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:21:04.571 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:21:04.571 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:04.571 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:04.571 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:04.571 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:04.571 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:21:04.571 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.571 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:04.571 malloc2 00:21:04.571 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.571 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:04.571 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.571 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:04.571 [2024-10-17 20:17:50.051603] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:04.571 [2024-10-17 20:17:50.051806] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:04.571 [2024-10-17 20:17:50.051895] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:04.571 [2024-10-17 20:17:50.052144] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:04.571 [2024-10-17 20:17:50.054771] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:04.571 [2024-10-17 20:17:50.054815] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:04.571 pt2 00:21:04.571 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.571 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:04.571 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:04.571 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:21:04.571 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.571 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:04.571 [2024-10-17 20:17:50.059702] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:04.571 [2024-10-17 20:17:50.062478] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:04.571 [2024-10-17 20:17:50.062860] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:04.571 [2024-10-17 20:17:50.062980] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:21:04.571 [2024-10-17 20:17:50.063145] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:04.571 [2024-10-17 20:17:50.063357] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:04.571 [2024-10-17 20:17:50.063474] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:04.571 [2024-10-17 20:17:50.063785] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:04.571 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.571 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:04.571 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:04.571 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:04.571 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:04.571 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:04.571 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:04.571 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:04.571 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:04.571 
20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:04.571 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:04.571 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:04.571 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:04.571 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.571 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:04.571 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.571 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:04.571 "name": "raid_bdev1", 00:21:04.571 "uuid": "7b174727-44ec-4b0a-99e3-a45f54836b4d", 00:21:04.571 "strip_size_kb": 0, 00:21:04.571 "state": "online", 00:21:04.571 "raid_level": "raid1", 00:21:04.571 "superblock": true, 00:21:04.571 "num_base_bdevs": 2, 00:21:04.571 "num_base_bdevs_discovered": 2, 00:21:04.571 "num_base_bdevs_operational": 2, 00:21:04.571 "base_bdevs_list": [ 00:21:04.571 { 00:21:04.571 "name": "pt1", 00:21:04.571 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:04.571 "is_configured": true, 00:21:04.571 "data_offset": 256, 00:21:04.571 "data_size": 7936 00:21:04.571 }, 00:21:04.571 { 00:21:04.571 "name": "pt2", 00:21:04.571 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:04.571 "is_configured": true, 00:21:04.571 "data_offset": 256, 00:21:04.571 "data_size": 7936 00:21:04.571 } 00:21:04.571 ] 00:21:04.571 }' 00:21:04.571 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:04.572 20:17:50 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:05.137 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:21:05.137 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:05.137 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:05.137 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:05.137 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:21:05.137 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:05.137 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:05.137 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.137 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:05.137 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:05.137 [2024-10-17 20:17:50.564340] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:05.137 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.137 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:05.137 "name": "raid_bdev1", 00:21:05.137 "aliases": [ 00:21:05.137 "7b174727-44ec-4b0a-99e3-a45f54836b4d" 00:21:05.137 ], 00:21:05.137 "product_name": "Raid Volume", 00:21:05.137 "block_size": 4128, 00:21:05.137 "num_blocks": 7936, 00:21:05.137 "uuid": "7b174727-44ec-4b0a-99e3-a45f54836b4d", 00:21:05.137 "md_size": 32, 
00:21:05.137 "md_interleave": true, 00:21:05.137 "dif_type": 0, 00:21:05.137 "assigned_rate_limits": { 00:21:05.137 "rw_ios_per_sec": 0, 00:21:05.137 "rw_mbytes_per_sec": 0, 00:21:05.137 "r_mbytes_per_sec": 0, 00:21:05.137 "w_mbytes_per_sec": 0 00:21:05.137 }, 00:21:05.137 "claimed": false, 00:21:05.137 "zoned": false, 00:21:05.137 "supported_io_types": { 00:21:05.137 "read": true, 00:21:05.137 "write": true, 00:21:05.137 "unmap": false, 00:21:05.137 "flush": false, 00:21:05.137 "reset": true, 00:21:05.137 "nvme_admin": false, 00:21:05.137 "nvme_io": false, 00:21:05.137 "nvme_io_md": false, 00:21:05.137 "write_zeroes": true, 00:21:05.137 "zcopy": false, 00:21:05.137 "get_zone_info": false, 00:21:05.137 "zone_management": false, 00:21:05.137 "zone_append": false, 00:21:05.137 "compare": false, 00:21:05.137 "compare_and_write": false, 00:21:05.137 "abort": false, 00:21:05.137 "seek_hole": false, 00:21:05.137 "seek_data": false, 00:21:05.137 "copy": false, 00:21:05.137 "nvme_iov_md": false 00:21:05.137 }, 00:21:05.137 "memory_domains": [ 00:21:05.137 { 00:21:05.137 "dma_device_id": "system", 00:21:05.137 "dma_device_type": 1 00:21:05.137 }, 00:21:05.137 { 00:21:05.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:05.137 "dma_device_type": 2 00:21:05.137 }, 00:21:05.137 { 00:21:05.137 "dma_device_id": "system", 00:21:05.137 "dma_device_type": 1 00:21:05.137 }, 00:21:05.137 { 00:21:05.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:05.137 "dma_device_type": 2 00:21:05.137 } 00:21:05.137 ], 00:21:05.137 "driver_specific": { 00:21:05.137 "raid": { 00:21:05.137 "uuid": "7b174727-44ec-4b0a-99e3-a45f54836b4d", 00:21:05.137 "strip_size_kb": 0, 00:21:05.137 "state": "online", 00:21:05.137 "raid_level": "raid1", 00:21:05.137 "superblock": true, 00:21:05.137 "num_base_bdevs": 2, 00:21:05.137 "num_base_bdevs_discovered": 2, 00:21:05.137 "num_base_bdevs_operational": 2, 00:21:05.137 "base_bdevs_list": [ 00:21:05.137 { 00:21:05.137 "name": "pt1", 00:21:05.137 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:21:05.137 "is_configured": true, 00:21:05.137 "data_offset": 256, 00:21:05.137 "data_size": 7936 00:21:05.137 }, 00:21:05.137 { 00:21:05.137 "name": "pt2", 00:21:05.137 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:05.137 "is_configured": true, 00:21:05.137 "data_offset": 256, 00:21:05.137 "data_size": 7936 00:21:05.137 } 00:21:05.137 ] 00:21:05.137 } 00:21:05.137 } 00:21:05.137 }' 00:21:05.137 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:05.137 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:05.137 pt2' 00:21:05.137 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:05.137 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:21:05.137 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:05.137 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:05.137 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.137 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:05.137 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:05.137 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.137 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:21:05.137 20:17:50 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:21:05.137 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:05.137 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:05.137 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:05.137 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.137 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:05.137 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.396 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:21:05.396 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:21:05.396 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:05.396 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:21:05.396 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.396 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:05.396 [2024-10-17 20:17:50.816394] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:05.396 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.396 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7b174727-44ec-4b0a-99e3-a45f54836b4d 00:21:05.396 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 7b174727-44ec-4b0a-99e3-a45f54836b4d ']' 00:21:05.396 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:05.396 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.396 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:05.396 [2024-10-17 20:17:50.864006] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:05.396 [2024-10-17 20:17:50.864168] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:05.396 [2024-10-17 20:17:50.864416] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:05.396 [2024-10-17 20:17:50.864603] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:05.396 [2024-10-17 20:17:50.864748] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:05.396 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.396 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:21:05.396 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:05.396 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.396 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:05.396 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.396 20:17:50 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:21:05.396 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:21:05.396 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:05.396 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:21:05.396 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.396 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:05.396 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.397 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:05.397 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:21:05.397 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.397 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:05.397 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.397 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:21:05.397 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.397 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:05.397 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:05.397 20:17:50 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.397 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:21:05.397 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:05.397 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:21:05.397 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:05.397 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:05.397 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:05.397 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:05.397 20:17:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:05.397 20:17:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:05.397 20:17:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.397 20:17:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:05.397 [2024-10-17 20:17:51.008114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:05.397 [2024-10-17 20:17:51.010834] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:05.397 [2024-10-17 20:17:51.011084] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:21:05.397 [2024-10-17 20:17:51.011171] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:21:05.397 [2024-10-17 20:17:51.011200] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:05.397 [2024-10-17 20:17:51.011216] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:21:05.397 request: 00:21:05.397 { 00:21:05.397 "name": "raid_bdev1", 00:21:05.397 "raid_level": "raid1", 00:21:05.397 "base_bdevs": [ 00:21:05.397 "malloc1", 00:21:05.397 "malloc2" 00:21:05.397 ], 00:21:05.397 "superblock": false, 00:21:05.397 "method": "bdev_raid_create", 00:21:05.397 "req_id": 1 00:21:05.397 } 00:21:05.397 Got JSON-RPC error response 00:21:05.397 response: 00:21:05.397 { 00:21:05.397 "code": -17, 00:21:05.397 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:05.397 } 00:21:05.397 20:17:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:05.397 20:17:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:21:05.397 20:17:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:05.397 20:17:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:05.397 20:17:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:05.397 20:17:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:05.397 20:17:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.397 20:17:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:05.397 20:17:51 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:21:05.397 20:17:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.656 20:17:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:21:05.656 20:17:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:21:05.656 20:17:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:05.656 20:17:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.656 20:17:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:05.656 [2024-10-17 20:17:51.064065] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:05.656 [2024-10-17 20:17:51.064278] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:05.656 [2024-10-17 20:17:51.064350] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:05.656 [2024-10-17 20:17:51.064463] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:05.656 [2024-10-17 20:17:51.067010] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:05.656 [2024-10-17 20:17:51.067162] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:05.656 [2024-10-17 20:17:51.067249] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:05.656 [2024-10-17 20:17:51.067357] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:05.656 pt1 00:21:05.656 20:17:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.656 20:17:51 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:21:05.656 20:17:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:05.656 20:17:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:05.656 20:17:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:05.656 20:17:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:05.656 20:17:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:05.656 20:17:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:05.656 20:17:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:05.656 20:17:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:05.656 20:17:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:05.656 20:17:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:05.656 20:17:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:05.656 20:17:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.656 20:17:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:05.656 20:17:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.656 20:17:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:05.656 
"name": "raid_bdev1", 00:21:05.656 "uuid": "7b174727-44ec-4b0a-99e3-a45f54836b4d", 00:21:05.656 "strip_size_kb": 0, 00:21:05.656 "state": "configuring", 00:21:05.656 "raid_level": "raid1", 00:21:05.656 "superblock": true, 00:21:05.656 "num_base_bdevs": 2, 00:21:05.656 "num_base_bdevs_discovered": 1, 00:21:05.656 "num_base_bdevs_operational": 2, 00:21:05.656 "base_bdevs_list": [ 00:21:05.656 { 00:21:05.656 "name": "pt1", 00:21:05.656 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:05.656 "is_configured": true, 00:21:05.656 "data_offset": 256, 00:21:05.656 "data_size": 7936 00:21:05.656 }, 00:21:05.656 { 00:21:05.656 "name": null, 00:21:05.656 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:05.656 "is_configured": false, 00:21:05.656 "data_offset": 256, 00:21:05.656 "data_size": 7936 00:21:05.656 } 00:21:05.656 ] 00:21:05.656 }' 00:21:05.656 20:17:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:05.656 20:17:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:06.223 20:17:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:21:06.223 20:17:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:21:06.223 20:17:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:06.223 20:17:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:06.223 20:17:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.223 20:17:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:06.223 [2024-10-17 20:17:51.596196] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:06.223 [2024-10-17 20:17:51.596294] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:06.223 [2024-10-17 20:17:51.596329] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:06.223 [2024-10-17 20:17:51.596346] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:06.223 [2024-10-17 20:17:51.596561] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:06.223 [2024-10-17 20:17:51.596591] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:06.223 [2024-10-17 20:17:51.596657] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:06.223 [2024-10-17 20:17:51.596692] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:06.223 [2024-10-17 20:17:51.596814] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:06.223 [2024-10-17 20:17:51.596835] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:21:06.223 [2024-10-17 20:17:51.596929] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:06.223 [2024-10-17 20:17:51.597039] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:06.223 [2024-10-17 20:17:51.597072] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:21:06.223 [2024-10-17 20:17:51.597164] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:06.223 pt2 00:21:06.223 20:17:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.223 20:17:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:06.223 20:17:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:06.223 20:17:51 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:06.223 20:17:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:06.223 20:17:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:06.223 20:17:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:06.223 20:17:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:06.223 20:17:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:06.223 20:17:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:06.223 20:17:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:06.223 20:17:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:06.223 20:17:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:06.223 20:17:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:06.223 20:17:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.223 20:17:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:06.223 20:17:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:06.223 20:17:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.223 20:17:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:06.223 "name": 
"raid_bdev1", 00:21:06.223 "uuid": "7b174727-44ec-4b0a-99e3-a45f54836b4d", 00:21:06.223 "strip_size_kb": 0, 00:21:06.223 "state": "online", 00:21:06.223 "raid_level": "raid1", 00:21:06.223 "superblock": true, 00:21:06.223 "num_base_bdevs": 2, 00:21:06.223 "num_base_bdevs_discovered": 2, 00:21:06.223 "num_base_bdevs_operational": 2, 00:21:06.223 "base_bdevs_list": [ 00:21:06.223 { 00:21:06.223 "name": "pt1", 00:21:06.223 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:06.223 "is_configured": true, 00:21:06.223 "data_offset": 256, 00:21:06.223 "data_size": 7936 00:21:06.223 }, 00:21:06.223 { 00:21:06.223 "name": "pt2", 00:21:06.223 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:06.223 "is_configured": true, 00:21:06.223 "data_offset": 256, 00:21:06.223 "data_size": 7936 00:21:06.223 } 00:21:06.223 ] 00:21:06.223 }' 00:21:06.223 20:17:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:06.223 20:17:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:06.482 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:21:06.482 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:06.482 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:06.482 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:06.482 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:21:06.482 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:06.482 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:06.482 20:17:52 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.482 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:06.482 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:06.482 [2024-10-17 20:17:52.096703] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:06.482 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.741 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:06.741 "name": "raid_bdev1", 00:21:06.741 "aliases": [ 00:21:06.741 "7b174727-44ec-4b0a-99e3-a45f54836b4d" 00:21:06.741 ], 00:21:06.742 "product_name": "Raid Volume", 00:21:06.742 "block_size": 4128, 00:21:06.742 "num_blocks": 7936, 00:21:06.742 "uuid": "7b174727-44ec-4b0a-99e3-a45f54836b4d", 00:21:06.742 "md_size": 32, 00:21:06.742 "md_interleave": true, 00:21:06.742 "dif_type": 0, 00:21:06.742 "assigned_rate_limits": { 00:21:06.742 "rw_ios_per_sec": 0, 00:21:06.742 "rw_mbytes_per_sec": 0, 00:21:06.742 "r_mbytes_per_sec": 0, 00:21:06.742 "w_mbytes_per_sec": 0 00:21:06.742 }, 00:21:06.742 "claimed": false, 00:21:06.742 "zoned": false, 00:21:06.742 "supported_io_types": { 00:21:06.742 "read": true, 00:21:06.742 "write": true, 00:21:06.742 "unmap": false, 00:21:06.742 "flush": false, 00:21:06.742 "reset": true, 00:21:06.742 "nvme_admin": false, 00:21:06.742 "nvme_io": false, 00:21:06.742 "nvme_io_md": false, 00:21:06.742 "write_zeroes": true, 00:21:06.742 "zcopy": false, 00:21:06.742 "get_zone_info": false, 00:21:06.742 "zone_management": false, 00:21:06.742 "zone_append": false, 00:21:06.742 "compare": false, 00:21:06.742 "compare_and_write": false, 00:21:06.742 "abort": false, 00:21:06.742 "seek_hole": false, 00:21:06.742 "seek_data": false, 00:21:06.742 "copy": false, 00:21:06.742 "nvme_iov_md": 
false 00:21:06.742 }, 00:21:06.742 "memory_domains": [ 00:21:06.742 { 00:21:06.742 "dma_device_id": "system", 00:21:06.742 "dma_device_type": 1 00:21:06.742 }, 00:21:06.742 { 00:21:06.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:06.742 "dma_device_type": 2 00:21:06.742 }, 00:21:06.742 { 00:21:06.742 "dma_device_id": "system", 00:21:06.742 "dma_device_type": 1 00:21:06.742 }, 00:21:06.742 { 00:21:06.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:06.742 "dma_device_type": 2 00:21:06.742 } 00:21:06.742 ], 00:21:06.742 "driver_specific": { 00:21:06.742 "raid": { 00:21:06.742 "uuid": "7b174727-44ec-4b0a-99e3-a45f54836b4d", 00:21:06.742 "strip_size_kb": 0, 00:21:06.742 "state": "online", 00:21:06.742 "raid_level": "raid1", 00:21:06.742 "superblock": true, 00:21:06.742 "num_base_bdevs": 2, 00:21:06.742 "num_base_bdevs_discovered": 2, 00:21:06.742 "num_base_bdevs_operational": 2, 00:21:06.742 "base_bdevs_list": [ 00:21:06.742 { 00:21:06.742 "name": "pt1", 00:21:06.742 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:06.742 "is_configured": true, 00:21:06.742 "data_offset": 256, 00:21:06.742 "data_size": 7936 00:21:06.742 }, 00:21:06.742 { 00:21:06.742 "name": "pt2", 00:21:06.742 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:06.742 "is_configured": true, 00:21:06.742 "data_offset": 256, 00:21:06.742 "data_size": 7936 00:21:06.742 } 00:21:06.742 ] 00:21:06.742 } 00:21:06.742 } 00:21:06.742 }' 00:21:06.742 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:06.742 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:06.742 pt2' 00:21:06.742 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:06.742 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:21:06.742 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:06.742 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:06.742 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:06.742 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.742 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:06.742 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.742 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:21:06.742 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:21:06.742 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:06.742 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:06.742 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:06.742 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.742 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:06.742 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.742 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='4128 32 true 0' 00:21:06.742 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:21:06.742 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:06.742 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:21:06.742 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.742 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:06.742 [2024-10-17 20:17:52.348740] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:06.742 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.001 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 7b174727-44ec-4b0a-99e3-a45f54836b4d '!=' 7b174727-44ec-4b0a-99e3-a45f54836b4d ']' 00:21:07.001 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:21:07.001 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:07.001 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:21:07.001 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:21:07.001 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.001 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:07.001 [2024-10-17 20:17:52.400517] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:21:07.001 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:21:07.001 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:07.001 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:07.001 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:07.001 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:07.001 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:07.001 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:07.001 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:07.001 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:07.001 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:07.001 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:07.001 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:07.001 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:07.001 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.001 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:07.001 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.001 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:21:07.001 "name": "raid_bdev1", 00:21:07.001 "uuid": "7b174727-44ec-4b0a-99e3-a45f54836b4d", 00:21:07.001 "strip_size_kb": 0, 00:21:07.001 "state": "online", 00:21:07.001 "raid_level": "raid1", 00:21:07.001 "superblock": true, 00:21:07.001 "num_base_bdevs": 2, 00:21:07.001 "num_base_bdevs_discovered": 1, 00:21:07.001 "num_base_bdevs_operational": 1, 00:21:07.001 "base_bdevs_list": [ 00:21:07.001 { 00:21:07.001 "name": null, 00:21:07.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:07.001 "is_configured": false, 00:21:07.001 "data_offset": 0, 00:21:07.001 "data_size": 7936 00:21:07.001 }, 00:21:07.001 { 00:21:07.001 "name": "pt2", 00:21:07.001 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:07.001 "is_configured": true, 00:21:07.001 "data_offset": 256, 00:21:07.001 "data_size": 7936 00:21:07.001 } 00:21:07.001 ] 00:21:07.001 }' 00:21:07.001 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:07.001 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:07.260 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:07.260 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.260 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:07.260 [2024-10-17 20:17:52.908571] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:07.260 [2024-10-17 20:17:52.908606] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:07.260 [2024-10-17 20:17:52.908703] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:07.260 [2024-10-17 20:17:52.908770] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:21:07.260 [2024-10-17 20:17:52.908792] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:21:07.519 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.519 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:07.519 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.519 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:07.519 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:21:07.519 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.519 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:21:07.519 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:21:07.519 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:21:07.519 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:07.519 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:21:07.519 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.519 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:07.519 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.519 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:21:07.519 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:07.519 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:21:07.519 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:21:07.519 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:21:07.519 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:07.519 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.519 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:07.519 [2024-10-17 20:17:52.984585] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:07.519 [2024-10-17 20:17:52.984851] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:07.519 [2024-10-17 20:17:52.984920] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:21:07.519 [2024-10-17 20:17:52.985070] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:07.519 [2024-10-17 20:17:52.988182] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:07.519 [2024-10-17 20:17:52.988357] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:07.519 [2024-10-17 20:17:52.988545] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:07.519 [2024-10-17 20:17:52.988732] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:07.519 [2024-10-17 20:17:52.989009] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:07.519 pt2 00:21:07.519 [2024-10-17 20:17:52.989136] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:21:07.519 [2024-10-17 20:17:52.989294] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:07.519 [2024-10-17 20:17:52.989435] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:07.519 [2024-10-17 20:17:52.989519] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:21:07.519 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.519 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:07.519 [2024-10-17 20:17:52.989741] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:07.519 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:07.519 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:07.519 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:07.519 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:07.519 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:07.519 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:07.519 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:07.519 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:07.519 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:07.519 20:17:52 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:07.519 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.519 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:07.519 20:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:07.519 20:17:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.519 20:17:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:07.519 "name": "raid_bdev1", 00:21:07.519 "uuid": "7b174727-44ec-4b0a-99e3-a45f54836b4d", 00:21:07.519 "strip_size_kb": 0, 00:21:07.519 "state": "online", 00:21:07.519 "raid_level": "raid1", 00:21:07.519 "superblock": true, 00:21:07.519 "num_base_bdevs": 2, 00:21:07.519 "num_base_bdevs_discovered": 1, 00:21:07.519 "num_base_bdevs_operational": 1, 00:21:07.519 "base_bdevs_list": [ 00:21:07.519 { 00:21:07.519 "name": null, 00:21:07.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:07.519 "is_configured": false, 00:21:07.519 "data_offset": 256, 00:21:07.519 "data_size": 7936 00:21:07.519 }, 00:21:07.519 { 00:21:07.519 "name": "pt2", 00:21:07.519 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:07.519 "is_configured": true, 00:21:07.519 "data_offset": 256, 00:21:07.519 "data_size": 7936 00:21:07.519 } 00:21:07.519 ] 00:21:07.519 }' 00:21:07.519 20:17:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:07.519 20:17:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:08.087 20:17:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:08.087 20:17:53 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.087 20:17:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:08.087 [2024-10-17 20:17:53.504850] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:08.087 [2024-10-17 20:17:53.505032] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:08.087 [2024-10-17 20:17:53.505237] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:08.087 [2024-10-17 20:17:53.505322] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:08.087 [2024-10-17 20:17:53.505340] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:21:08.087 20:17:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.087 20:17:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:08.087 20:17:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.087 20:17:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:21:08.087 20:17:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:08.087 20:17:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.087 20:17:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:21:08.087 20:17:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:21:08.087 20:17:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:21:08.087 20:17:53 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:08.087 20:17:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.087 20:17:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:08.087 [2024-10-17 20:17:53.568885] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:08.087 [2024-10-17 20:17:53.569118] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:08.087 [2024-10-17 20:17:53.569195] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:21:08.087 [2024-10-17 20:17:53.569450] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:08.087 [2024-10-17 20:17:53.572393] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:08.087 [2024-10-17 20:17:53.572439] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:08.087 [2024-10-17 20:17:53.572517] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:08.087 [2024-10-17 20:17:53.572577] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:08.087 [2024-10-17 20:17:53.572709] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:21:08.087 [2024-10-17 20:17:53.572727] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:08.087 [2024-10-17 20:17:53.572753] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:21:08.087 [2024-10-17 20:17:53.572838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:08.087 [2024-10-17 20:17:53.572938] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000008900 00:21:08.087 [2024-10-17 20:17:53.572970] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:21:08.087 [2024-10-17 20:17:53.573065] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:08.087 [2024-10-17 20:17:53.573156] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:21:08.087 [2024-10-17 20:17:53.573192] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:21:08.087 pt1 00:21:08.087 [2024-10-17 20:17:53.573340] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:08.087 20:17:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.087 20:17:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:21:08.087 20:17:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:08.087 20:17:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:08.087 20:17:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:08.087 20:17:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:08.087 20:17:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:08.087 20:17:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:08.087 20:17:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:08.087 20:17:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:08.087 20:17:53 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:08.087 20:17:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:08.087 20:17:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:08.087 20:17:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.087 20:17:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:08.087 20:17:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:08.087 20:17:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.087 20:17:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:08.087 "name": "raid_bdev1", 00:21:08.087 "uuid": "7b174727-44ec-4b0a-99e3-a45f54836b4d", 00:21:08.087 "strip_size_kb": 0, 00:21:08.087 "state": "online", 00:21:08.087 "raid_level": "raid1", 00:21:08.087 "superblock": true, 00:21:08.087 "num_base_bdevs": 2, 00:21:08.087 "num_base_bdevs_discovered": 1, 00:21:08.087 "num_base_bdevs_operational": 1, 00:21:08.087 "base_bdevs_list": [ 00:21:08.087 { 00:21:08.088 "name": null, 00:21:08.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:08.088 "is_configured": false, 00:21:08.088 "data_offset": 256, 00:21:08.088 "data_size": 7936 00:21:08.088 }, 00:21:08.088 { 00:21:08.088 "name": "pt2", 00:21:08.088 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:08.088 "is_configured": true, 00:21:08.088 "data_offset": 256, 00:21:08.088 "data_size": 7936 00:21:08.088 } 00:21:08.088 ] 00:21:08.088 }' 00:21:08.088 20:17:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:08.088 20:17:53 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:21:08.654 20:17:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:21:08.654 20:17:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.654 20:17:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:08.654 20:17:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:08.654 20:17:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.654 20:17:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:21:08.654 20:17:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:08.654 20:17:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.654 20:17:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:21:08.654 20:17:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:08.654 [2024-10-17 20:17:54.145553] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:08.654 20:17:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.654 20:17:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 7b174727-44ec-4b0a-99e3-a45f54836b4d '!=' 7b174727-44ec-4b0a-99e3-a45f54836b4d ']' 00:21:08.654 20:17:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 89062 00:21:08.654 20:17:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 89062 ']' 00:21:08.654 20:17:54 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 89062 00:21:08.654 20:17:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:21:08.654 20:17:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:08.655 20:17:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89062 00:21:08.655 killing process with pid 89062 00:21:08.655 20:17:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:08.655 20:17:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:08.655 20:17:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89062' 00:21:08.655 20:17:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@969 -- # kill 89062 00:21:08.655 [2024-10-17 20:17:54.219611] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:08.655 20:17:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@974 -- # wait 89062 00:21:08.655 [2024-10-17 20:17:54.219724] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:08.655 [2024-10-17 20:17:54.219806] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:08.655 [2024-10-17 20:17:54.219833] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:21:08.913 [2024-10-17 20:17:54.409847] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:09.874 ************************************ 00:21:09.874 END TEST raid_superblock_test_md_interleaved 00:21:09.874 ************************************ 00:21:09.874 20:17:55 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:21:09.874 00:21:09.874 real 0m6.584s 00:21:09.874 user 0m10.391s 00:21:09.874 sys 0m0.983s 00:21:09.874 20:17:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:09.874 20:17:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:09.874 20:17:55 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:21:09.874 20:17:55 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:21:09.874 20:17:55 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:09.874 20:17:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:09.874 ************************************ 00:21:09.874 START TEST raid_rebuild_test_sb_md_interleaved 00:21:09.874 ************************************ 00:21:09.874 20:17:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false false 00:21:09.874 20:17:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:21:09.874 20:17:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:21:09.874 20:17:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:21:09.874 20:17:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:21:09.874 20:17:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:21:09.874 20:17:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:21:09.874 20:17:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:09.874 20:17:55 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:21:09.874 20:17:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:09.874 20:17:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:09.874 20:17:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:21:09.874 20:17:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:09.874 20:17:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:09.874 20:17:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:09.874 20:17:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:21:09.874 20:17:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:21:09.874 20:17:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:21:09.874 20:17:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:21:09.874 20:17:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:21:09.874 20:17:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:21:09.874 20:17:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:21:09.874 20:17:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:21:09.874 20:17:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:21:09.874 20:17:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:21:09.874 
20:17:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=89393 00:21:09.874 20:17:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 89393 00:21:09.874 20:17:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 89393 ']' 00:21:09.874 20:17:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:09.874 20:17:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:09.874 20:17:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:09.874 20:17:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:09.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:09.874 20:17:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:09.874 20:17:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:10.133 [2024-10-17 20:17:55.625291] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 00:21:10.133 [2024-10-17 20:17:55.625780] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89393 ] 00:21:10.133 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:10.133 Zero copy mechanism will not be used. 
00:21:10.391 [2024-10-17 20:17:55.803735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:10.391 [2024-10-17 20:17:55.961083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:10.649 [2024-10-17 20:17:56.165232] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:10.649 [2024-10-17 20:17:56.165414] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:11.215 20:17:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:11.215 20:17:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:21:11.215 20:17:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:11.215 20:17:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:21:11.215 20:17:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.215 20:17:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:11.215 BaseBdev1_malloc 00:21:11.215 20:17:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.215 20:17:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:11.215 20:17:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.215 20:17:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:11.215 [2024-10-17 20:17:56.681532] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:11.215 [2024-10-17 20:17:56.681748] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:11.215 [2024-10-17 20:17:56.681824] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:11.215 [2024-10-17 20:17:56.681962] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:11.215 [2024-10-17 20:17:56.684872] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:11.215 [2024-10-17 20:17:56.684922] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:11.215 BaseBdev1 00:21:11.215 20:17:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.215 20:17:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:11.215 20:17:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:21:11.215 20:17:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.215 20:17:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:11.215 BaseBdev2_malloc 00:21:11.215 20:17:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.215 20:17:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:11.215 20:17:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.215 20:17:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:11.215 [2024-10-17 20:17:56.735701] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:11.215 [2024-10-17 20:17:56.735923] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:21:11.215 [2024-10-17 20:17:56.736008] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:11.215 [2024-10-17 20:17:56.736140] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:11.215 [2024-10-17 20:17:56.738949] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:11.215 [2024-10-17 20:17:56.739123] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:11.215 BaseBdev2 00:21:11.215 20:17:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.215 20:17:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:21:11.215 20:17:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.215 20:17:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:11.215 spare_malloc 00:21:11.215 20:17:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.216 20:17:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:11.216 20:17:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.216 20:17:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:11.216 spare_delay 00:21:11.216 20:17:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.216 20:17:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:11.216 20:17:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:21:11.216 20:17:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:11.216 [2024-10-17 20:17:56.809728] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:11.216 [2024-10-17 20:17:56.809924] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:11.216 [2024-10-17 20:17:56.809963] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:11.216 [2024-10-17 20:17:56.809983] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:11.216 [2024-10-17 20:17:56.812821] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:11.216 [2024-10-17 20:17:56.812870] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:11.216 spare 00:21:11.216 20:17:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.216 20:17:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:21:11.216 20:17:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.216 20:17:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:11.216 [2024-10-17 20:17:56.817801] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:11.216 [2024-10-17 20:17:56.820943] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:11.216 [2024-10-17 20:17:56.821363] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:11.216 [2024-10-17 20:17:56.821504] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:21:11.216 [2024-10-17 20:17:56.821621] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:11.216 [2024-10-17 20:17:56.821732] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:11.216 [2024-10-17 20:17:56.821746] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:11.216 [2024-10-17 20:17:56.821894] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:11.216 20:17:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.216 20:17:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:11.216 20:17:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:11.216 20:17:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:11.216 20:17:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:11.216 20:17:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:11.216 20:17:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:11.216 20:17:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:11.216 20:17:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:11.216 20:17:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:11.216 20:17:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:11.216 20:17:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:11.216 20:17:56 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.216 20:17:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:11.216 20:17:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:11.216 20:17:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.474 20:17:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:11.474 "name": "raid_bdev1", 00:21:11.474 "uuid": "44679b70-a0a8-4e86-a51f-766ddc37074d", 00:21:11.474 "strip_size_kb": 0, 00:21:11.474 "state": "online", 00:21:11.474 "raid_level": "raid1", 00:21:11.474 "superblock": true, 00:21:11.474 "num_base_bdevs": 2, 00:21:11.474 "num_base_bdevs_discovered": 2, 00:21:11.474 "num_base_bdevs_operational": 2, 00:21:11.474 "base_bdevs_list": [ 00:21:11.474 { 00:21:11.474 "name": "BaseBdev1", 00:21:11.474 "uuid": "75e8d4ea-64e4-5cb6-a009-12ba5a288165", 00:21:11.474 "is_configured": true, 00:21:11.474 "data_offset": 256, 00:21:11.474 "data_size": 7936 00:21:11.474 }, 00:21:11.474 { 00:21:11.474 "name": "BaseBdev2", 00:21:11.474 "uuid": "2bc2f4af-3f52-52ad-92b4-e437ef825707", 00:21:11.474 "is_configured": true, 00:21:11.474 "data_offset": 256, 00:21:11.474 "data_size": 7936 00:21:11.474 } 00:21:11.474 ] 00:21:11.474 }' 00:21:11.474 20:17:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:11.474 20:17:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:11.740 20:17:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:21:11.740 20:17:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:11.740 20:17:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.740 20:17:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:11.740 [2024-10-17 20:17:57.318466] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:11.740 20:17:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.740 20:17:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:21:11.740 20:17:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:11.740 20:17:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:11.740 20:17:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.740 20:17:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:11.740 20:17:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.999 20:17:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:21:11.999 20:17:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:21:11.999 20:17:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:21:11.999 20:17:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:21:11.999 20:17:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.999 20:17:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:11.999 [2024-10-17 20:17:57.414142] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:11.999 20:17:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.999 20:17:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:11.999 20:17:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:11.999 20:17:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:11.999 20:17:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:11.999 20:17:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:11.999 20:17:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:11.999 20:17:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:11.999 20:17:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:11.999 20:17:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:11.999 20:17:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:11.999 20:17:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:11.999 20:17:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.999 20:17:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:11.999 20:17:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:11.999 20:17:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.999 20:17:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:11.999 "name": "raid_bdev1", 00:21:11.999 "uuid": "44679b70-a0a8-4e86-a51f-766ddc37074d", 00:21:11.999 "strip_size_kb": 0, 00:21:11.999 "state": "online", 00:21:11.999 "raid_level": "raid1", 00:21:11.999 "superblock": true, 00:21:11.999 "num_base_bdevs": 2, 00:21:11.999 "num_base_bdevs_discovered": 1, 00:21:11.999 "num_base_bdevs_operational": 1, 00:21:11.999 "base_bdevs_list": [ 00:21:11.999 { 00:21:11.999 "name": null, 00:21:11.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:11.999 "is_configured": false, 00:21:11.999 "data_offset": 0, 00:21:11.999 "data_size": 7936 00:21:11.999 }, 00:21:11.999 { 00:21:11.999 "name": "BaseBdev2", 00:21:11.999 "uuid": "2bc2f4af-3f52-52ad-92b4-e437ef825707", 00:21:11.999 "is_configured": true, 00:21:11.999 "data_offset": 256, 00:21:11.999 "data_size": 7936 00:21:11.999 } 00:21:11.999 ] 00:21:11.999 }' 00:21:11.999 20:17:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:11.999 20:17:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:12.566 20:17:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:12.566 20:17:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.566 20:17:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:12.566 [2024-10-17 20:17:57.926319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:12.566 [2024-10-17 20:17:57.944252] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:12.566 20:17:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.566 20:17:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:21:12.566 [2024-10-17 20:17:57.947383] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:13.499 20:17:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:13.499 20:17:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:13.499 20:17:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:13.499 20:17:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:13.499 20:17:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:13.499 20:17:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:13.499 20:17:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:13.500 20:17:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.500 20:17:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:13.500 20:17:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.500 20:17:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:13.500 "name": "raid_bdev1", 00:21:13.500 "uuid": "44679b70-a0a8-4e86-a51f-766ddc37074d", 00:21:13.500 "strip_size_kb": 0, 00:21:13.500 "state": "online", 00:21:13.500 "raid_level": "raid1", 00:21:13.500 "superblock": true, 00:21:13.500 "num_base_bdevs": 2, 00:21:13.500 "num_base_bdevs_discovered": 2, 00:21:13.500 
"num_base_bdevs_operational": 2, 00:21:13.500 "process": { 00:21:13.500 "type": "rebuild", 00:21:13.500 "target": "spare", 00:21:13.500 "progress": { 00:21:13.500 "blocks": 2560, 00:21:13.500 "percent": 32 00:21:13.500 } 00:21:13.500 }, 00:21:13.500 "base_bdevs_list": [ 00:21:13.500 { 00:21:13.500 "name": "spare", 00:21:13.500 "uuid": "2d4b1db7-77b2-5219-b276-9af5fe80338a", 00:21:13.500 "is_configured": true, 00:21:13.500 "data_offset": 256, 00:21:13.500 "data_size": 7936 00:21:13.500 }, 00:21:13.500 { 00:21:13.500 "name": "BaseBdev2", 00:21:13.500 "uuid": "2bc2f4af-3f52-52ad-92b4-e437ef825707", 00:21:13.500 "is_configured": true, 00:21:13.500 "data_offset": 256, 00:21:13.500 "data_size": 7936 00:21:13.500 } 00:21:13.500 ] 00:21:13.500 }' 00:21:13.500 20:17:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:13.500 20:17:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:13.500 20:17:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:13.500 20:17:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:13.500 20:17:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:13.500 20:17:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.500 20:17:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:13.500 [2024-10-17 20:17:59.112590] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:13.758 [2024-10-17 20:17:59.156568] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:13.758 [2024-10-17 20:17:59.156870] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:21:13.758 [2024-10-17 20:17:59.156900] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:13.758 [2024-10-17 20:17:59.156921] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:13.758 20:17:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.758 20:17:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:13.758 20:17:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:13.758 20:17:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:13.758 20:17:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:13.758 20:17:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:13.758 20:17:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:13.758 20:17:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:13.758 20:17:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:13.758 20:17:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:13.758 20:17:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:13.758 20:17:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:13.758 20:17:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:13.758 20:17:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.758 20:17:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:13.758 20:17:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.758 20:17:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:13.758 "name": "raid_bdev1", 00:21:13.758 "uuid": "44679b70-a0a8-4e86-a51f-766ddc37074d", 00:21:13.758 "strip_size_kb": 0, 00:21:13.758 "state": "online", 00:21:13.758 "raid_level": "raid1", 00:21:13.758 "superblock": true, 00:21:13.758 "num_base_bdevs": 2, 00:21:13.758 "num_base_bdevs_discovered": 1, 00:21:13.758 "num_base_bdevs_operational": 1, 00:21:13.758 "base_bdevs_list": [ 00:21:13.758 { 00:21:13.758 "name": null, 00:21:13.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:13.758 "is_configured": false, 00:21:13.758 "data_offset": 0, 00:21:13.758 "data_size": 7936 00:21:13.758 }, 00:21:13.758 { 00:21:13.758 "name": "BaseBdev2", 00:21:13.758 "uuid": "2bc2f4af-3f52-52ad-92b4-e437ef825707", 00:21:13.758 "is_configured": true, 00:21:13.758 "data_offset": 256, 00:21:13.758 "data_size": 7936 00:21:13.758 } 00:21:13.758 ] 00:21:13.758 }' 00:21:13.758 20:17:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:13.758 20:17:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:14.325 20:17:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:14.325 20:17:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:14.325 20:17:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:14.325 20:17:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 
00:21:14.325 20:17:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:14.325 20:17:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:14.325 20:17:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:14.325 20:17:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.325 20:17:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:14.325 20:17:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.325 20:17:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:14.325 "name": "raid_bdev1", 00:21:14.325 "uuid": "44679b70-a0a8-4e86-a51f-766ddc37074d", 00:21:14.325 "strip_size_kb": 0, 00:21:14.325 "state": "online", 00:21:14.325 "raid_level": "raid1", 00:21:14.325 "superblock": true, 00:21:14.325 "num_base_bdevs": 2, 00:21:14.325 "num_base_bdevs_discovered": 1, 00:21:14.325 "num_base_bdevs_operational": 1, 00:21:14.325 "base_bdevs_list": [ 00:21:14.325 { 00:21:14.325 "name": null, 00:21:14.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:14.325 "is_configured": false, 00:21:14.325 "data_offset": 0, 00:21:14.325 "data_size": 7936 00:21:14.325 }, 00:21:14.325 { 00:21:14.325 "name": "BaseBdev2", 00:21:14.325 "uuid": "2bc2f4af-3f52-52ad-92b4-e437ef825707", 00:21:14.325 "is_configured": true, 00:21:14.325 "data_offset": 256, 00:21:14.325 "data_size": 7936 00:21:14.325 } 00:21:14.325 ] 00:21:14.325 }' 00:21:14.325 20:17:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:14.325 20:17:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:14.325 20:17:59 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:14.325 20:17:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:14.325 20:17:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:14.325 20:17:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.325 20:17:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:14.325 [2024-10-17 20:17:59.880139] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:14.325 [2024-10-17 20:17:59.896086] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:14.325 20:17:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.325 20:17:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:21:14.325 [2024-10-17 20:17:59.898708] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:15.259 20:18:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:15.259 20:18:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:15.259 20:18:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:15.259 20:18:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:15.259 20:18:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:15.259 20:18:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:21:15.259 20:18:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:15.259 20:18:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.259 20:18:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:15.518 20:18:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.518 20:18:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:15.518 "name": "raid_bdev1", 00:21:15.518 "uuid": "44679b70-a0a8-4e86-a51f-766ddc37074d", 00:21:15.518 "strip_size_kb": 0, 00:21:15.518 "state": "online", 00:21:15.518 "raid_level": "raid1", 00:21:15.518 "superblock": true, 00:21:15.518 "num_base_bdevs": 2, 00:21:15.518 "num_base_bdevs_discovered": 2, 00:21:15.518 "num_base_bdevs_operational": 2, 00:21:15.518 "process": { 00:21:15.518 "type": "rebuild", 00:21:15.518 "target": "spare", 00:21:15.518 "progress": { 00:21:15.518 "blocks": 2560, 00:21:15.519 "percent": 32 00:21:15.519 } 00:21:15.519 }, 00:21:15.519 "base_bdevs_list": [ 00:21:15.519 { 00:21:15.519 "name": "spare", 00:21:15.519 "uuid": "2d4b1db7-77b2-5219-b276-9af5fe80338a", 00:21:15.519 "is_configured": true, 00:21:15.519 "data_offset": 256, 00:21:15.519 "data_size": 7936 00:21:15.519 }, 00:21:15.519 { 00:21:15.519 "name": "BaseBdev2", 00:21:15.519 "uuid": "2bc2f4af-3f52-52ad-92b4-e437ef825707", 00:21:15.519 "is_configured": true, 00:21:15.519 "data_offset": 256, 00:21:15.519 "data_size": 7936 00:21:15.519 } 00:21:15.519 ] 00:21:15.519 }' 00:21:15.519 20:18:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:15.519 20:18:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:15.519 20:18:01 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:15.519 20:18:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:15.519 20:18:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:21:15.519 20:18:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:21:15.519 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:21:15.519 20:18:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:21:15.519 20:18:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:21:15.519 20:18:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:21:15.519 20:18:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=796 00:21:15.519 20:18:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:15.519 20:18:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:15.519 20:18:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:15.519 20:18:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:15.519 20:18:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:15.519 20:18:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:15.519 20:18:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:15.519 20:18:01 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:15.519 20:18:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.519 20:18:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:15.519 20:18:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.519 20:18:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:15.519 "name": "raid_bdev1", 00:21:15.519 "uuid": "44679b70-a0a8-4e86-a51f-766ddc37074d", 00:21:15.519 "strip_size_kb": 0, 00:21:15.519 "state": "online", 00:21:15.519 "raid_level": "raid1", 00:21:15.519 "superblock": true, 00:21:15.519 "num_base_bdevs": 2, 00:21:15.519 "num_base_bdevs_discovered": 2, 00:21:15.519 "num_base_bdevs_operational": 2, 00:21:15.519 "process": { 00:21:15.519 "type": "rebuild", 00:21:15.519 "target": "spare", 00:21:15.519 "progress": { 00:21:15.519 "blocks": 2816, 00:21:15.519 "percent": 35 00:21:15.519 } 00:21:15.519 }, 00:21:15.519 "base_bdevs_list": [ 00:21:15.519 { 00:21:15.519 "name": "spare", 00:21:15.519 "uuid": "2d4b1db7-77b2-5219-b276-9af5fe80338a", 00:21:15.519 "is_configured": true, 00:21:15.519 "data_offset": 256, 00:21:15.519 "data_size": 7936 00:21:15.519 }, 00:21:15.519 { 00:21:15.519 "name": "BaseBdev2", 00:21:15.519 "uuid": "2bc2f4af-3f52-52ad-92b4-e437ef825707", 00:21:15.519 "is_configured": true, 00:21:15.519 "data_offset": 256, 00:21:15.519 "data_size": 7936 00:21:15.519 } 00:21:15.519 ] 00:21:15.519 }' 00:21:15.519 20:18:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:15.519 20:18:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:15.519 20:18:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:21:15.783 20:18:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:15.783 20:18:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:16.717 20:18:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:16.717 20:18:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:16.717 20:18:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:16.717 20:18:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:16.717 20:18:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:16.717 20:18:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:16.717 20:18:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:16.717 20:18:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:16.717 20:18:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.717 20:18:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:16.717 20:18:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.717 20:18:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:16.717 "name": "raid_bdev1", 00:21:16.717 "uuid": "44679b70-a0a8-4e86-a51f-766ddc37074d", 00:21:16.717 "strip_size_kb": 0, 00:21:16.717 "state": "online", 00:21:16.717 "raid_level": "raid1", 00:21:16.717 "superblock": true, 
00:21:16.717 "num_base_bdevs": 2, 00:21:16.717 "num_base_bdevs_discovered": 2, 00:21:16.717 "num_base_bdevs_operational": 2, 00:21:16.717 "process": { 00:21:16.717 "type": "rebuild", 00:21:16.717 "target": "spare", 00:21:16.717 "progress": { 00:21:16.717 "blocks": 5888, 00:21:16.717 "percent": 74 00:21:16.717 } 00:21:16.717 }, 00:21:16.717 "base_bdevs_list": [ 00:21:16.717 { 00:21:16.717 "name": "spare", 00:21:16.717 "uuid": "2d4b1db7-77b2-5219-b276-9af5fe80338a", 00:21:16.717 "is_configured": true, 00:21:16.717 "data_offset": 256, 00:21:16.717 "data_size": 7936 00:21:16.717 }, 00:21:16.717 { 00:21:16.717 "name": "BaseBdev2", 00:21:16.717 "uuid": "2bc2f4af-3f52-52ad-92b4-e437ef825707", 00:21:16.717 "is_configured": true, 00:21:16.717 "data_offset": 256, 00:21:16.717 "data_size": 7936 00:21:16.717 } 00:21:16.717 ] 00:21:16.717 }' 00:21:16.717 20:18:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:16.717 20:18:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:16.717 20:18:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:16.717 20:18:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:16.717 20:18:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:17.652 [2024-10-17 20:18:03.021399] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:17.652 [2024-10-17 20:18:03.021520] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:17.652 [2024-10-17 20:18:03.021684] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:17.910 20:18:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:17.910 20:18:03 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:17.910 20:18:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:17.910 20:18:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:17.910 20:18:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:17.910 20:18:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:17.910 20:18:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:17.910 20:18:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.910 20:18:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:17.910 20:18:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:17.910 20:18:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.910 20:18:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:17.910 "name": "raid_bdev1", 00:21:17.910 "uuid": "44679b70-a0a8-4e86-a51f-766ddc37074d", 00:21:17.910 "strip_size_kb": 0, 00:21:17.910 "state": "online", 00:21:17.910 "raid_level": "raid1", 00:21:17.910 "superblock": true, 00:21:17.910 "num_base_bdevs": 2, 00:21:17.910 "num_base_bdevs_discovered": 2, 00:21:17.910 "num_base_bdevs_operational": 2, 00:21:17.910 "base_bdevs_list": [ 00:21:17.910 { 00:21:17.910 "name": "spare", 00:21:17.910 "uuid": "2d4b1db7-77b2-5219-b276-9af5fe80338a", 00:21:17.910 "is_configured": true, 00:21:17.910 "data_offset": 256, 00:21:17.910 "data_size": 7936 00:21:17.910 }, 00:21:17.910 { 00:21:17.910 
"name": "BaseBdev2", 00:21:17.910 "uuid": "2bc2f4af-3f52-52ad-92b4-e437ef825707", 00:21:17.910 "is_configured": true, 00:21:17.910 "data_offset": 256, 00:21:17.910 "data_size": 7936 00:21:17.910 } 00:21:17.911 ] 00:21:17.911 }' 00:21:17.911 20:18:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:17.911 20:18:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:17.911 20:18:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:17.911 20:18:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:21:17.911 20:18:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:21:17.911 20:18:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:17.911 20:18:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:17.911 20:18:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:17.911 20:18:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:17.911 20:18:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:17.911 20:18:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:17.911 20:18:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.911 20:18:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:17.911 20:18:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:17.911 
20:18:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.169 20:18:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:18.169 "name": "raid_bdev1", 00:21:18.169 "uuid": "44679b70-a0a8-4e86-a51f-766ddc37074d", 00:21:18.169 "strip_size_kb": 0, 00:21:18.169 "state": "online", 00:21:18.169 "raid_level": "raid1", 00:21:18.169 "superblock": true, 00:21:18.169 "num_base_bdevs": 2, 00:21:18.169 "num_base_bdevs_discovered": 2, 00:21:18.169 "num_base_bdevs_operational": 2, 00:21:18.169 "base_bdevs_list": [ 00:21:18.169 { 00:21:18.169 "name": "spare", 00:21:18.169 "uuid": "2d4b1db7-77b2-5219-b276-9af5fe80338a", 00:21:18.169 "is_configured": true, 00:21:18.169 "data_offset": 256, 00:21:18.169 "data_size": 7936 00:21:18.169 }, 00:21:18.169 { 00:21:18.169 "name": "BaseBdev2", 00:21:18.169 "uuid": "2bc2f4af-3f52-52ad-92b4-e437ef825707", 00:21:18.169 "is_configured": true, 00:21:18.169 "data_offset": 256, 00:21:18.169 "data_size": 7936 00:21:18.169 } 00:21:18.169 ] 00:21:18.169 }' 00:21:18.169 20:18:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:18.169 20:18:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:18.169 20:18:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:18.169 20:18:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:18.169 20:18:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:18.169 20:18:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:18.169 20:18:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:21:18.169 20:18:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:18.169 20:18:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:18.169 20:18:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:18.169 20:18:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:18.169 20:18:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:18.169 20:18:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:18.169 20:18:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:18.169 20:18:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:18.169 20:18:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.169 20:18:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:18.169 20:18:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:18.169 20:18:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.169 20:18:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:18.169 "name": "raid_bdev1", 00:21:18.169 "uuid": "44679b70-a0a8-4e86-a51f-766ddc37074d", 00:21:18.169 "strip_size_kb": 0, 00:21:18.169 "state": "online", 00:21:18.169 "raid_level": "raid1", 00:21:18.169 "superblock": true, 00:21:18.169 "num_base_bdevs": 2, 00:21:18.169 "num_base_bdevs_discovered": 2, 00:21:18.169 "num_base_bdevs_operational": 2, 00:21:18.169 "base_bdevs_list": [ 00:21:18.169 { 
00:21:18.169 "name": "spare", 00:21:18.169 "uuid": "2d4b1db7-77b2-5219-b276-9af5fe80338a", 00:21:18.169 "is_configured": true, 00:21:18.169 "data_offset": 256, 00:21:18.169 "data_size": 7936 00:21:18.169 }, 00:21:18.169 { 00:21:18.169 "name": "BaseBdev2", 00:21:18.169 "uuid": "2bc2f4af-3f52-52ad-92b4-e437ef825707", 00:21:18.169 "is_configured": true, 00:21:18.169 "data_offset": 256, 00:21:18.169 "data_size": 7936 00:21:18.169 } 00:21:18.169 ] 00:21:18.169 }' 00:21:18.169 20:18:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:18.169 20:18:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:18.740 20:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:18.740 20:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.740 20:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:18.740 [2024-10-17 20:18:04.193657] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:18.740 [2024-10-17 20:18:04.193848] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:18.740 [2024-10-17 20:18:04.194090] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:18.740 [2024-10-17 20:18:04.194306] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:18.740 [2024-10-17 20:18:04.194440] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:18.740 20:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.740 20:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:21:18.740 20:18:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:18.740 20:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.740 20:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:18.740 20:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.740 20:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:21:18.740 20:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:21:18.740 20:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:21:18.740 20:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:21:18.740 20:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.740 20:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:18.740 20:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.740 20:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:18.740 20:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.740 20:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:18.740 [2024-10-17 20:18:04.269637] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:18.740 [2024-10-17 20:18:04.269710] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:18.740 [2024-10-17 20:18:04.269741] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:21:18.740 [2024-10-17 20:18:04.269756] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:18.740 [2024-10-17 20:18:04.272771] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:18.740 [2024-10-17 20:18:04.272816] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:18.740 [2024-10-17 20:18:04.272893] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:18.740 [2024-10-17 20:18:04.272953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:18.740 [2024-10-17 20:18:04.273125] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:18.740 spare 00:21:18.740 20:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.740 20:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:21:18.740 20:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.740 20:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:18.740 [2024-10-17 20:18:04.373244] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:21:18.740 [2024-10-17 20:18:04.373495] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:21:18.740 [2024-10-17 20:18:04.373677] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:21:18.740 [2024-10-17 20:18:04.373950] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:21:18.740 [2024-10-17 20:18:04.373974] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:21:18.740 [2024-10-17 
20:18:04.374150] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:18.740 20:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.740 20:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:18.740 20:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:18.740 20:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:18.740 20:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:18.741 20:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:18.741 20:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:18.741 20:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:18.741 20:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:18.741 20:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:18.741 20:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:18.741 20:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:18.741 20:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:18.741 20:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.741 20:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:19.000 20:18:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.000 20:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:19.000 "name": "raid_bdev1", 00:21:19.000 "uuid": "44679b70-a0a8-4e86-a51f-766ddc37074d", 00:21:19.000 "strip_size_kb": 0, 00:21:19.000 "state": "online", 00:21:19.000 "raid_level": "raid1", 00:21:19.000 "superblock": true, 00:21:19.000 "num_base_bdevs": 2, 00:21:19.000 "num_base_bdevs_discovered": 2, 00:21:19.000 "num_base_bdevs_operational": 2, 00:21:19.000 "base_bdevs_list": [ 00:21:19.000 { 00:21:19.000 "name": "spare", 00:21:19.000 "uuid": "2d4b1db7-77b2-5219-b276-9af5fe80338a", 00:21:19.000 "is_configured": true, 00:21:19.000 "data_offset": 256, 00:21:19.000 "data_size": 7936 00:21:19.000 }, 00:21:19.000 { 00:21:19.000 "name": "BaseBdev2", 00:21:19.000 "uuid": "2bc2f4af-3f52-52ad-92b4-e437ef825707", 00:21:19.000 "is_configured": true, 00:21:19.000 "data_offset": 256, 00:21:19.000 "data_size": 7936 00:21:19.000 } 00:21:19.000 ] 00:21:19.000 }' 00:21:19.000 20:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:19.000 20:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:19.567 20:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:19.567 20:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:19.567 20:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:19.567 20:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:19.567 20:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:19.567 20:18:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:19.567 20:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.567 20:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:19.567 20:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:19.567 20:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.567 20:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:19.567 "name": "raid_bdev1", 00:21:19.567 "uuid": "44679b70-a0a8-4e86-a51f-766ddc37074d", 00:21:19.567 "strip_size_kb": 0, 00:21:19.567 "state": "online", 00:21:19.567 "raid_level": "raid1", 00:21:19.567 "superblock": true, 00:21:19.567 "num_base_bdevs": 2, 00:21:19.567 "num_base_bdevs_discovered": 2, 00:21:19.567 "num_base_bdevs_operational": 2, 00:21:19.567 "base_bdevs_list": [ 00:21:19.567 { 00:21:19.567 "name": "spare", 00:21:19.567 "uuid": "2d4b1db7-77b2-5219-b276-9af5fe80338a", 00:21:19.567 "is_configured": true, 00:21:19.567 "data_offset": 256, 00:21:19.567 "data_size": 7936 00:21:19.567 }, 00:21:19.567 { 00:21:19.567 "name": "BaseBdev2", 00:21:19.567 "uuid": "2bc2f4af-3f52-52ad-92b4-e437ef825707", 00:21:19.567 "is_configured": true, 00:21:19.567 "data_offset": 256, 00:21:19.567 "data_size": 7936 00:21:19.567 } 00:21:19.567 ] 00:21:19.567 }' 00:21:19.567 20:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:19.567 20:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:19.567 20:18:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:19.567 20:18:05 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:19.567 20:18:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:19.567 20:18:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.567 20:18:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:19.567 20:18:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:19.567 20:18:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.567 20:18:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:21:19.567 20:18:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:19.567 20:18:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.567 20:18:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:19.567 [2024-10-17 20:18:05.114966] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:19.567 20:18:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.567 20:18:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:19.567 20:18:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:19.567 20:18:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:19.567 20:18:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:19.567 20:18:05 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:19.567 20:18:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:19.567 20:18:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:19.567 20:18:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:19.567 20:18:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:19.567 20:18:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:19.567 20:18:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:19.567 20:18:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:19.567 20:18:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.567 20:18:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:19.567 20:18:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.567 20:18:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:19.567 "name": "raid_bdev1", 00:21:19.567 "uuid": "44679b70-a0a8-4e86-a51f-766ddc37074d", 00:21:19.567 "strip_size_kb": 0, 00:21:19.567 "state": "online", 00:21:19.567 "raid_level": "raid1", 00:21:19.567 "superblock": true, 00:21:19.567 "num_base_bdevs": 2, 00:21:19.567 "num_base_bdevs_discovered": 1, 00:21:19.567 "num_base_bdevs_operational": 1, 00:21:19.567 "base_bdevs_list": [ 00:21:19.567 { 00:21:19.567 "name": null, 00:21:19.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:19.567 "is_configured": false, 00:21:19.568 
"data_offset": 0, 00:21:19.568 "data_size": 7936 00:21:19.568 }, 00:21:19.568 { 00:21:19.568 "name": "BaseBdev2", 00:21:19.568 "uuid": "2bc2f4af-3f52-52ad-92b4-e437ef825707", 00:21:19.568 "is_configured": true, 00:21:19.568 "data_offset": 256, 00:21:19.568 "data_size": 7936 00:21:19.568 } 00:21:19.568 ] 00:21:19.568 }' 00:21:19.568 20:18:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:19.568 20:18:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:20.135 20:18:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:20.135 20:18:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.135 20:18:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:20.135 [2024-10-17 20:18:05.599151] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:20.135 [2024-10-17 20:18:05.599553] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:21:20.135 [2024-10-17 20:18:05.599593] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:21:20.135 [2024-10-17 20:18:05.599642] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:20.135 [2024-10-17 20:18:05.615253] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:21:20.135 20:18:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.135 20:18:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:21:20.135 [2024-10-17 20:18:05.617870] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:21.079 20:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:21.079 20:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:21.079 20:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:21.079 20:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:21.079 20:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:21.079 20:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:21.079 20:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.079 20:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:21.079 20:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:21.079 20:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.079 20:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:21:21.079 "name": "raid_bdev1", 00:21:21.079 "uuid": "44679b70-a0a8-4e86-a51f-766ddc37074d", 00:21:21.079 "strip_size_kb": 0, 00:21:21.079 "state": "online", 00:21:21.079 "raid_level": "raid1", 00:21:21.079 "superblock": true, 00:21:21.079 "num_base_bdevs": 2, 00:21:21.079 "num_base_bdevs_discovered": 2, 00:21:21.079 "num_base_bdevs_operational": 2, 00:21:21.079 "process": { 00:21:21.079 "type": "rebuild", 00:21:21.079 "target": "spare", 00:21:21.079 "progress": { 00:21:21.079 "blocks": 2560, 00:21:21.079 "percent": 32 00:21:21.079 } 00:21:21.079 }, 00:21:21.079 "base_bdevs_list": [ 00:21:21.079 { 00:21:21.079 "name": "spare", 00:21:21.079 "uuid": "2d4b1db7-77b2-5219-b276-9af5fe80338a", 00:21:21.079 "is_configured": true, 00:21:21.079 "data_offset": 256, 00:21:21.079 "data_size": 7936 00:21:21.079 }, 00:21:21.079 { 00:21:21.079 "name": "BaseBdev2", 00:21:21.079 "uuid": "2bc2f4af-3f52-52ad-92b4-e437ef825707", 00:21:21.079 "is_configured": true, 00:21:21.079 "data_offset": 256, 00:21:21.079 "data_size": 7936 00:21:21.079 } 00:21:21.079 ] 00:21:21.079 }' 00:21:21.079 20:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:21.079 20:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:21.079 20:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:21.339 20:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:21.339 20:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:21:21.339 20:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.339 20:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:21.339 [2024-10-17 20:18:06.770804] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:21.339 [2024-10-17 20:18:06.826687] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:21.339 [2024-10-17 20:18:06.826788] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:21.339 [2024-10-17 20:18:06.826815] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:21.339 [2024-10-17 20:18:06.826830] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:21.339 20:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.339 20:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:21.339 20:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:21.339 20:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:21.339 20:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:21.339 20:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:21.339 20:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:21.339 20:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:21.339 20:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:21.339 20:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:21.339 20:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:21.339 20:18:06 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:21.339 20:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.339 20:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:21.339 20:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:21.339 20:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.339 20:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:21.339 "name": "raid_bdev1", 00:21:21.339 "uuid": "44679b70-a0a8-4e86-a51f-766ddc37074d", 00:21:21.339 "strip_size_kb": 0, 00:21:21.339 "state": "online", 00:21:21.339 "raid_level": "raid1", 00:21:21.339 "superblock": true, 00:21:21.339 "num_base_bdevs": 2, 00:21:21.339 "num_base_bdevs_discovered": 1, 00:21:21.339 "num_base_bdevs_operational": 1, 00:21:21.339 "base_bdevs_list": [ 00:21:21.339 { 00:21:21.339 "name": null, 00:21:21.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:21.339 "is_configured": false, 00:21:21.339 "data_offset": 0, 00:21:21.339 "data_size": 7936 00:21:21.339 }, 00:21:21.339 { 00:21:21.339 "name": "BaseBdev2", 00:21:21.339 "uuid": "2bc2f4af-3f52-52ad-92b4-e437ef825707", 00:21:21.339 "is_configured": true, 00:21:21.339 "data_offset": 256, 00:21:21.339 "data_size": 7936 00:21:21.339 } 00:21:21.339 ] 00:21:21.339 }' 00:21:21.339 20:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:21.339 20:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:21.906 20:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:21.906 20:18:07 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.906 20:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:21.906 [2024-10-17 20:18:07.402674] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:21.906 [2024-10-17 20:18:07.402771] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:21.906 [2024-10-17 20:18:07.402808] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:21:21.906 [2024-10-17 20:18:07.402827] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:21.907 [2024-10-17 20:18:07.403100] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:21.907 [2024-10-17 20:18:07.403137] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:21.907 [2024-10-17 20:18:07.403222] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:21.907 [2024-10-17 20:18:07.403245] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:21:21.907 [2024-10-17 20:18:07.403260] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:21:21.907 [2024-10-17 20:18:07.403292] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:21.907 spare 00:21:21.907 [2024-10-17 20:18:07.418676] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:21:21.907 20:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.907 20:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:21:21.907 [2024-10-17 20:18:07.421133] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:22.843 20:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:22.843 20:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:22.843 20:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:22.843 20:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:22.843 20:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:22.843 20:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:22.843 20:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:22.843 20:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.843 20:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:22.843 20:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.843 20:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:21:22.843 "name": "raid_bdev1", 00:21:22.843 "uuid": "44679b70-a0a8-4e86-a51f-766ddc37074d", 00:21:22.843 "strip_size_kb": 0, 00:21:22.843 "state": "online", 00:21:22.843 "raid_level": "raid1", 00:21:22.843 "superblock": true, 00:21:22.843 "num_base_bdevs": 2, 00:21:22.843 "num_base_bdevs_discovered": 2, 00:21:22.843 "num_base_bdevs_operational": 2, 00:21:22.843 "process": { 00:21:22.843 "type": "rebuild", 00:21:22.843 "target": "spare", 00:21:22.843 "progress": { 00:21:22.843 "blocks": 2560, 00:21:22.843 "percent": 32 00:21:22.843 } 00:21:22.843 }, 00:21:22.843 "base_bdevs_list": [ 00:21:22.843 { 00:21:22.843 "name": "spare", 00:21:22.843 "uuid": "2d4b1db7-77b2-5219-b276-9af5fe80338a", 00:21:22.843 "is_configured": true, 00:21:22.843 "data_offset": 256, 00:21:22.843 "data_size": 7936 00:21:22.843 }, 00:21:22.843 { 00:21:22.843 "name": "BaseBdev2", 00:21:22.843 "uuid": "2bc2f4af-3f52-52ad-92b4-e437ef825707", 00:21:22.843 "is_configured": true, 00:21:22.843 "data_offset": 256, 00:21:22.843 "data_size": 7936 00:21:22.843 } 00:21:22.843 ] 00:21:22.843 }' 00:21:22.843 20:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:23.102 20:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:23.102 20:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:23.102 20:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:23.102 20:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:21:23.102 20:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.102 20:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:23.102 [2024-10-17 
20:18:08.582640] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:23.102 [2024-10-17 20:18:08.629868] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:23.102 [2024-10-17 20:18:08.630130] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:23.102 [2024-10-17 20:18:08.630167] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:23.102 [2024-10-17 20:18:08.630180] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:23.102 20:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.102 20:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:23.102 20:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:23.102 20:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:23.102 20:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:23.102 20:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:23.102 20:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:23.102 20:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:23.102 20:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:23.102 20:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:23.102 20:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:23.102 20:18:08 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:23.102 20:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:23.102 20:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.102 20:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:23.102 20:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.102 20:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:23.102 "name": "raid_bdev1", 00:21:23.102 "uuid": "44679b70-a0a8-4e86-a51f-766ddc37074d", 00:21:23.102 "strip_size_kb": 0, 00:21:23.102 "state": "online", 00:21:23.102 "raid_level": "raid1", 00:21:23.102 "superblock": true, 00:21:23.102 "num_base_bdevs": 2, 00:21:23.102 "num_base_bdevs_discovered": 1, 00:21:23.102 "num_base_bdevs_operational": 1, 00:21:23.102 "base_bdevs_list": [ 00:21:23.102 { 00:21:23.102 "name": null, 00:21:23.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:23.102 "is_configured": false, 00:21:23.102 "data_offset": 0, 00:21:23.102 "data_size": 7936 00:21:23.102 }, 00:21:23.102 { 00:21:23.102 "name": "BaseBdev2", 00:21:23.102 "uuid": "2bc2f4af-3f52-52ad-92b4-e437ef825707", 00:21:23.102 "is_configured": true, 00:21:23.102 "data_offset": 256, 00:21:23.102 "data_size": 7936 00:21:23.102 } 00:21:23.102 ] 00:21:23.102 }' 00:21:23.102 20:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:23.102 20:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:23.670 20:18:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:23.670 20:18:09 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:23.670 20:18:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:23.670 20:18:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:23.670 20:18:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:23.670 20:18:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:23.670 20:18:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:23.670 20:18:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.670 20:18:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:23.670 20:18:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.670 20:18:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:23.670 "name": "raid_bdev1", 00:21:23.670 "uuid": "44679b70-a0a8-4e86-a51f-766ddc37074d", 00:21:23.670 "strip_size_kb": 0, 00:21:23.670 "state": "online", 00:21:23.670 "raid_level": "raid1", 00:21:23.670 "superblock": true, 00:21:23.670 "num_base_bdevs": 2, 00:21:23.670 "num_base_bdevs_discovered": 1, 00:21:23.670 "num_base_bdevs_operational": 1, 00:21:23.670 "base_bdevs_list": [ 00:21:23.670 { 00:21:23.670 "name": null, 00:21:23.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:23.670 "is_configured": false, 00:21:23.670 "data_offset": 0, 00:21:23.670 "data_size": 7936 00:21:23.670 }, 00:21:23.670 { 00:21:23.670 "name": "BaseBdev2", 00:21:23.670 "uuid": "2bc2f4af-3f52-52ad-92b4-e437ef825707", 00:21:23.670 "is_configured": true, 00:21:23.670 "data_offset": 256, 
00:21:23.670 "data_size": 7936 00:21:23.670 } 00:21:23.670 ] 00:21:23.670 }' 00:21:23.670 20:18:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:23.670 20:18:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:23.670 20:18:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:23.929 20:18:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:23.929 20:18:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:21:23.929 20:18:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.929 20:18:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:23.929 20:18:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.929 20:18:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:23.929 20:18:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.929 20:18:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:23.929 [2024-10-17 20:18:09.374137] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:23.929 [2024-10-17 20:18:09.374207] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:23.929 [2024-10-17 20:18:09.374242] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:21:23.929 [2024-10-17 20:18:09.374257] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:23.929 [2024-10-17 20:18:09.374459] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:23.929 [2024-10-17 20:18:09.374480] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:23.929 [2024-10-17 20:18:09.374548] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:23.929 [2024-10-17 20:18:09.374571] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:23.929 [2024-10-17 20:18:09.374585] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:23.929 [2024-10-17 20:18:09.374598] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:21:23.929 BaseBdev1 00:21:23.929 20:18:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.929 20:18:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:21:24.865 20:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:24.865 20:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:24.865 20:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:24.865 20:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:24.865 20:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:24.865 20:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:24.865 20:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:24.865 20:18:10 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:24.865 20:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:24.865 20:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:24.865 20:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:24.865 20:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:24.865 20:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.865 20:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:24.865 20:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.865 20:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:24.865 "name": "raid_bdev1", 00:21:24.865 "uuid": "44679b70-a0a8-4e86-a51f-766ddc37074d", 00:21:24.865 "strip_size_kb": 0, 00:21:24.865 "state": "online", 00:21:24.865 "raid_level": "raid1", 00:21:24.865 "superblock": true, 00:21:24.865 "num_base_bdevs": 2, 00:21:24.865 "num_base_bdevs_discovered": 1, 00:21:24.865 "num_base_bdevs_operational": 1, 00:21:24.865 "base_bdevs_list": [ 00:21:24.865 { 00:21:24.865 "name": null, 00:21:24.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:24.865 "is_configured": false, 00:21:24.865 "data_offset": 0, 00:21:24.865 "data_size": 7936 00:21:24.865 }, 00:21:24.865 { 00:21:24.865 "name": "BaseBdev2", 00:21:24.865 "uuid": "2bc2f4af-3f52-52ad-92b4-e437ef825707", 00:21:24.865 "is_configured": true, 00:21:24.865 "data_offset": 256, 00:21:24.865 "data_size": 7936 00:21:24.865 } 00:21:24.865 ] 00:21:24.865 }' 00:21:24.865 20:18:10 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:24.865 20:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:25.434 20:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:25.434 20:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:25.434 20:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:25.434 20:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:25.434 20:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:25.434 20:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:25.434 20:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:25.434 20:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.434 20:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:25.434 20:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.434 20:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:25.434 "name": "raid_bdev1", 00:21:25.434 "uuid": "44679b70-a0a8-4e86-a51f-766ddc37074d", 00:21:25.434 "strip_size_kb": 0, 00:21:25.434 "state": "online", 00:21:25.434 "raid_level": "raid1", 00:21:25.434 "superblock": true, 00:21:25.434 "num_base_bdevs": 2, 00:21:25.434 "num_base_bdevs_discovered": 1, 00:21:25.434 "num_base_bdevs_operational": 1, 00:21:25.434 "base_bdevs_list": [ 00:21:25.434 { 00:21:25.434 "name": 
null, 00:21:25.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.434 "is_configured": false, 00:21:25.434 "data_offset": 0, 00:21:25.434 "data_size": 7936 00:21:25.434 }, 00:21:25.434 { 00:21:25.434 "name": "BaseBdev2", 00:21:25.434 "uuid": "2bc2f4af-3f52-52ad-92b4-e437ef825707", 00:21:25.434 "is_configured": true, 00:21:25.434 "data_offset": 256, 00:21:25.434 "data_size": 7936 00:21:25.434 } 00:21:25.434 ] 00:21:25.434 }' 00:21:25.434 20:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:25.434 20:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:25.434 20:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:25.434 20:18:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:25.434 20:18:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:25.434 20:18:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:21:25.434 20:18:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:25.434 20:18:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:25.434 20:18:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:25.434 20:18:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:25.434 20:18:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:25.434 20:18:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:25.434 20:18:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.434 20:18:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:25.434 [2024-10-17 20:18:11.042670] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:25.434 [2024-10-17 20:18:11.043027] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:25.434 [2024-10-17 20:18:11.043065] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:25.434 request: 00:21:25.434 { 00:21:25.434 "base_bdev": "BaseBdev1", 00:21:25.434 "raid_bdev": "raid_bdev1", 00:21:25.434 "method": "bdev_raid_add_base_bdev", 00:21:25.434 "req_id": 1 00:21:25.434 } 00:21:25.434 Got JSON-RPC error response 00:21:25.434 response: 00:21:25.434 { 00:21:25.434 "code": -22, 00:21:25.434 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:21:25.434 } 00:21:25.434 20:18:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:25.434 20:18:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:21:25.434 20:18:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:25.434 20:18:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:25.434 20:18:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:25.434 20:18:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:21:26.811 20:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:21:26.811 20:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:26.811 20:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:26.811 20:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:26.811 20:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:26.811 20:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:26.811 20:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:26.811 20:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:26.811 20:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:26.811 20:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:26.811 20:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:26.811 20:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:26.811 20:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.811 20:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:26.811 20:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.811 20:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:26.811 "name": "raid_bdev1", 00:21:26.811 "uuid": "44679b70-a0a8-4e86-a51f-766ddc37074d", 00:21:26.811 "strip_size_kb": 0, 
00:21:26.811 "state": "online", 00:21:26.811 "raid_level": "raid1", 00:21:26.811 "superblock": true, 00:21:26.811 "num_base_bdevs": 2, 00:21:26.811 "num_base_bdevs_discovered": 1, 00:21:26.811 "num_base_bdevs_operational": 1, 00:21:26.811 "base_bdevs_list": [ 00:21:26.812 { 00:21:26.812 "name": null, 00:21:26.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:26.812 "is_configured": false, 00:21:26.812 "data_offset": 0, 00:21:26.812 "data_size": 7936 00:21:26.812 }, 00:21:26.812 { 00:21:26.812 "name": "BaseBdev2", 00:21:26.812 "uuid": "2bc2f4af-3f52-52ad-92b4-e437ef825707", 00:21:26.812 "is_configured": true, 00:21:26.812 "data_offset": 256, 00:21:26.812 "data_size": 7936 00:21:26.812 } 00:21:26.812 ] 00:21:26.812 }' 00:21:26.812 20:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:26.812 20:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:27.071 20:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:27.071 20:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:27.071 20:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:27.071 20:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:27.071 20:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:27.071 20:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:27.071 20:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:27.071 20:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.071 
20:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:27.071 20:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.071 20:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:27.071 "name": "raid_bdev1", 00:21:27.071 "uuid": "44679b70-a0a8-4e86-a51f-766ddc37074d", 00:21:27.071 "strip_size_kb": 0, 00:21:27.071 "state": "online", 00:21:27.071 "raid_level": "raid1", 00:21:27.071 "superblock": true, 00:21:27.071 "num_base_bdevs": 2, 00:21:27.071 "num_base_bdevs_discovered": 1, 00:21:27.071 "num_base_bdevs_operational": 1, 00:21:27.071 "base_bdevs_list": [ 00:21:27.071 { 00:21:27.071 "name": null, 00:21:27.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:27.071 "is_configured": false, 00:21:27.071 "data_offset": 0, 00:21:27.071 "data_size": 7936 00:21:27.071 }, 00:21:27.071 { 00:21:27.071 "name": "BaseBdev2", 00:21:27.071 "uuid": "2bc2f4af-3f52-52ad-92b4-e437ef825707", 00:21:27.071 "is_configured": true, 00:21:27.071 "data_offset": 256, 00:21:27.071 "data_size": 7936 00:21:27.071 } 00:21:27.071 ] 00:21:27.071 }' 00:21:27.071 20:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:27.340 20:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:27.340 20:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:27.340 20:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:27.340 20:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89393 00:21:27.340 20:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 89393 ']' 00:21:27.340 20:18:12 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 89393 00:21:27.340 20:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:21:27.340 20:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:27.340 20:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89393 00:21:27.340 killing process with pid 89393 00:21:27.341 Received shutdown signal, test time was about 60.000000 seconds 00:21:27.341 00:21:27.341 Latency(us) 00:21:27.341 [2024-10-17T20:18:12.995Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:27.341 [2024-10-17T20:18:12.995Z] =================================================================================================================== 00:21:27.341 [2024-10-17T20:18:12.995Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:27.341 20:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:27.341 20:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:27.341 20:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89393' 00:21:27.341 20:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 89393 00:21:27.341 [2024-10-17 20:18:12.824582] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:27.341 20:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 89393 00:21:27.341 [2024-10-17 20:18:12.824746] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:27.341 [2024-10-17 20:18:12.824811] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:21:27.341 [2024-10-17 20:18:12.824830] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:21:27.626 [2024-10-17 20:18:13.088986] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:28.561 20:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:21:28.561 00:21:28.561 real 0m18.590s 00:21:28.561 user 0m25.404s 00:21:28.561 sys 0m1.402s 00:21:28.561 20:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:28.561 ************************************ 00:21:28.561 END TEST raid_rebuild_test_sb_md_interleaved 00:21:28.561 ************************************ 00:21:28.561 20:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:28.561 20:18:14 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:21:28.561 20:18:14 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:21:28.561 20:18:14 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89393 ']' 00:21:28.561 20:18:14 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89393 00:21:28.561 20:18:14 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:21:28.561 00:21:28.561 real 12m59.133s 00:21:28.561 user 18m24.609s 00:21:28.561 sys 1m46.317s 00:21:28.561 20:18:14 bdev_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:28.561 20:18:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:28.561 ************************************ 00:21:28.561 END TEST bdev_raid 00:21:28.561 ************************************ 00:21:28.561 20:18:14 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:21:28.561 20:18:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:28.561 20:18:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:28.561 20:18:14 -- common/autotest_common.sh@10 -- # set +x 00:21:28.822 
************************************ 00:21:28.822 START TEST spdkcli_raid 00:21:28.822 ************************************ 00:21:28.822 20:18:14 spdkcli_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:21:28.822 * Looking for test storage... 00:21:28.822 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:21:28.822 20:18:14 spdkcli_raid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:28.822 20:18:14 spdkcli_raid -- common/autotest_common.sh@1691 -- # lcov --version 00:21:28.822 20:18:14 spdkcli_raid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:28.822 20:18:14 spdkcli_raid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:28.822 20:18:14 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:28.822 20:18:14 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:28.822 20:18:14 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:28.822 20:18:14 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:21:28.822 20:18:14 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:21:28.822 20:18:14 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:21:28.822 20:18:14 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:21:28.822 20:18:14 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:21:28.822 20:18:14 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:21:28.822 20:18:14 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:21:28.822 20:18:14 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:28.822 20:18:14 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:21:28.822 20:18:14 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:21:28.822 20:18:14 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:28.822 20:18:14 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:28.822 20:18:14 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:21:28.822 20:18:14 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:21:28.822 20:18:14 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:28.822 20:18:14 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:21:28.822 20:18:14 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:21:28.822 20:18:14 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:21:28.822 20:18:14 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:21:28.822 20:18:14 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:28.822 20:18:14 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:21:28.822 20:18:14 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:21:28.822 20:18:14 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:28.822 20:18:14 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:28.822 20:18:14 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:21:28.822 20:18:14 spdkcli_raid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:28.822 20:18:14 spdkcli_raid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:28.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:28.822 --rc genhtml_branch_coverage=1 00:21:28.822 --rc genhtml_function_coverage=1 00:21:28.822 --rc genhtml_legend=1 00:21:28.822 --rc geninfo_all_blocks=1 00:21:28.822 --rc geninfo_unexecuted_blocks=1 00:21:28.822 00:21:28.822 ' 00:21:28.822 20:18:14 spdkcli_raid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:28.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:28.822 --rc genhtml_branch_coverage=1 00:21:28.822 --rc genhtml_function_coverage=1 00:21:28.822 --rc genhtml_legend=1 00:21:28.822 --rc geninfo_all_blocks=1 00:21:28.822 --rc geninfo_unexecuted_blocks=1 00:21:28.822 00:21:28.822 ' 00:21:28.822 
20:18:14 spdkcli_raid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:28.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:28.822 --rc genhtml_branch_coverage=1 00:21:28.822 --rc genhtml_function_coverage=1 00:21:28.822 --rc genhtml_legend=1 00:21:28.822 --rc geninfo_all_blocks=1 00:21:28.822 --rc geninfo_unexecuted_blocks=1 00:21:28.822 00:21:28.822 ' 00:21:28.822 20:18:14 spdkcli_raid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:28.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:28.822 --rc genhtml_branch_coverage=1 00:21:28.822 --rc genhtml_function_coverage=1 00:21:28.822 --rc genhtml_legend=1 00:21:28.822 --rc geninfo_all_blocks=1 00:21:28.822 --rc geninfo_unexecuted_blocks=1 00:21:28.822 00:21:28.822 ' 00:21:28.822 20:18:14 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:21:28.822 20:18:14 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:21:28.822 20:18:14 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:21:28.822 20:18:14 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:21:28.822 20:18:14 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:21:28.822 20:18:14 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:21:28.822 20:18:14 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:21:28.822 20:18:14 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:21:28.822 20:18:14 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:21:28.822 20:18:14 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:21:28.822 20:18:14 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:21:28.822 20:18:14 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:21:28.822 20:18:14 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:21:28.822 20:18:14 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:21:28.822 20:18:14 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:21:28.822 20:18:14 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:21:28.822 20:18:14 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:21:28.822 20:18:14 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:21:28.822 20:18:14 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:21:28.822 20:18:14 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:21:28.822 20:18:14 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:21:28.822 20:18:14 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:21:28.822 20:18:14 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:21:28.822 20:18:14 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:21:28.822 20:18:14 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:21:28.822 20:18:14 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:21:28.822 20:18:14 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:21:28.822 20:18:14 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:21:28.822 20:18:14 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:21:28.822 20:18:14 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:21:28.822 20:18:14 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:21:28.822 20:18:14 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:21:28.822 20:18:14 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:21:28.822 20:18:14 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:28.822 20:18:14 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:28.822 20:18:14 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:21:28.822 20:18:14 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=90092 00:21:28.822 20:18:14 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 90092 00:21:28.822 20:18:14 spdkcli_raid -- common/autotest_common.sh@831 -- # '[' -z 90092 ']' 00:21:28.822 20:18:14 spdkcli_raid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:28.822 20:18:14 spdkcli_raid -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:28.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:28.822 20:18:14 spdkcli_raid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:28.822 20:18:14 spdkcli_raid -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:28.822 20:18:14 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:28.822 20:18:14 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:21:29.080 [2024-10-17 20:18:14.537880] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 
00:21:29.080 [2024-10-17 20:18:14.538066] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90092 ] 00:21:29.080 [2024-10-17 20:18:14.698404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:29.338 [2024-10-17 20:18:14.834470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:29.338 [2024-10-17 20:18:14.834480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:30.273 20:18:15 spdkcli_raid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:30.273 20:18:15 spdkcli_raid -- common/autotest_common.sh@864 -- # return 0 00:21:30.273 20:18:15 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:21:30.273 20:18:15 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:30.273 20:18:15 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:30.273 20:18:15 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:21:30.273 20:18:15 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:30.273 20:18:15 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:30.273 20:18:15 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:21:30.273 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:21:30.273 ' 00:21:31.649 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:21:31.649 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:21:31.906 20:18:17 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:21:31.906 20:18:17 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:31.906 20:18:17 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:21:31.906 20:18:17 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:21:31.906 20:18:17 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:31.907 20:18:17 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:31.907 20:18:17 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:21:31.907 ' 00:21:32.841 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:21:33.105 20:18:18 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:21:33.105 20:18:18 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:33.105 20:18:18 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:33.105 20:18:18 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:21:33.105 20:18:18 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:33.105 20:18:18 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:33.105 20:18:18 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:21:33.105 20:18:18 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:21:33.671 20:18:19 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:21:33.671 20:18:19 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:21:33.671 20:18:19 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:21:33.671 20:18:19 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:33.671 20:18:19 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:33.671 20:18:19 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:21:33.671 20:18:19 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:33.671 20:18:19 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:33.671 20:18:19 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:21:33.671 ' 00:21:35.046 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:21:35.046 20:18:20 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:21:35.046 20:18:20 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:35.046 20:18:20 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:35.046 20:18:20 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:21:35.046 20:18:20 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:35.046 20:18:20 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:35.046 20:18:20 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:21:35.046 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:21:35.046 ' 00:21:36.422 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:21:36.422 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:21:36.422 20:18:21 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:21:36.422 20:18:21 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:36.422 20:18:21 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:36.422 20:18:22 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 90092 00:21:36.422 20:18:22 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 90092 ']' 00:21:36.422 20:18:22 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 90092 00:21:36.422 20:18:22 spdkcli_raid -- 
common/autotest_common.sh@955 -- # uname 00:21:36.422 20:18:22 spdkcli_raid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:36.422 20:18:22 spdkcli_raid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90092 00:21:36.422 20:18:22 spdkcli_raid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:36.422 20:18:22 spdkcli_raid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:36.422 killing process with pid 90092 00:21:36.422 20:18:22 spdkcli_raid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90092' 00:21:36.422 20:18:22 spdkcli_raid -- common/autotest_common.sh@969 -- # kill 90092 00:21:36.422 20:18:22 spdkcli_raid -- common/autotest_common.sh@974 -- # wait 90092 00:21:38.954 20:18:24 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:21:38.954 20:18:24 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 90092 ']' 00:21:38.954 20:18:24 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 90092 00:21:38.954 20:18:24 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 90092 ']' 00:21:38.954 20:18:24 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 90092 00:21:38.954 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (90092) - No such process 00:21:38.954 Process with pid 90092 is not found 00:21:38.954 20:18:24 spdkcli_raid -- common/autotest_common.sh@977 -- # echo 'Process with pid 90092 is not found' 00:21:38.954 20:18:24 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:21:38.954 20:18:24 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:21:38.954 20:18:24 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:21:38.954 20:18:24 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:21:38.954 00:21:38.954 real 0m10.032s 00:21:38.954 user 0m20.795s 00:21:38.954 sys 
0m1.046s 00:21:38.954 20:18:24 spdkcli_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:38.954 20:18:24 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:38.954 ************************************ 00:21:38.954 END TEST spdkcli_raid 00:21:38.954 ************************************ 00:21:38.954 20:18:24 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:21:38.954 20:18:24 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:38.954 20:18:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:38.954 20:18:24 -- common/autotest_common.sh@10 -- # set +x 00:21:38.954 ************************************ 00:21:38.954 START TEST blockdev_raid5f 00:21:38.954 ************************************ 00:21:38.954 20:18:24 blockdev_raid5f -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:21:38.954 * Looking for test storage... 00:21:38.954 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:21:38.954 20:18:24 blockdev_raid5f -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:38.954 20:18:24 blockdev_raid5f -- common/autotest_common.sh@1691 -- # lcov --version 00:21:38.954 20:18:24 blockdev_raid5f -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:38.954 20:18:24 blockdev_raid5f -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:38.954 20:18:24 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:38.954 20:18:24 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:38.954 20:18:24 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:38.954 20:18:24 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:21:38.954 20:18:24 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:21:38.954 20:18:24 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:21:38.954 20:18:24 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:21:38.954 20:18:24 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:21:38.954 20:18:24 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:21:38.954 20:18:24 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:21:38.954 20:18:24 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:38.954 20:18:24 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:21:38.954 20:18:24 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:21:38.954 20:18:24 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:38.954 20:18:24 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:38.954 20:18:24 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:21:38.954 20:18:24 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:21:38.954 20:18:24 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:38.954 20:18:24 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:21:38.955 20:18:24 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:21:38.955 20:18:24 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:21:38.955 20:18:24 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:21:38.955 20:18:24 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:38.955 20:18:24 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:21:38.955 20:18:24 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:21:38.955 20:18:24 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:38.955 20:18:24 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:38.955 20:18:24 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:21:38.955 20:18:24 blockdev_raid5f -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:38.955 20:18:24 blockdev_raid5f -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:38.955 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:38.955 --rc genhtml_branch_coverage=1 00:21:38.955 --rc genhtml_function_coverage=1 00:21:38.955 --rc genhtml_legend=1 00:21:38.955 --rc geninfo_all_blocks=1 00:21:38.955 --rc geninfo_unexecuted_blocks=1 00:21:38.955 00:21:38.955 ' 00:21:38.955 20:18:24 blockdev_raid5f -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:38.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:38.955 --rc genhtml_branch_coverage=1 00:21:38.955 --rc genhtml_function_coverage=1 00:21:38.955 --rc genhtml_legend=1 00:21:38.955 --rc geninfo_all_blocks=1 00:21:38.955 --rc geninfo_unexecuted_blocks=1 00:21:38.955 00:21:38.955 ' 00:21:38.955 20:18:24 blockdev_raid5f -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:38.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:38.955 --rc genhtml_branch_coverage=1 00:21:38.955 --rc genhtml_function_coverage=1 00:21:38.955 --rc genhtml_legend=1 00:21:38.955 --rc geninfo_all_blocks=1 00:21:38.955 --rc geninfo_unexecuted_blocks=1 00:21:38.955 00:21:38.955 ' 00:21:38.955 20:18:24 blockdev_raid5f -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:38.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:38.955 --rc genhtml_branch_coverage=1 00:21:38.955 --rc genhtml_function_coverage=1 00:21:38.955 --rc genhtml_legend=1 00:21:38.955 --rc geninfo_all_blocks=1 00:21:38.955 --rc geninfo_unexecuted_blocks=1 00:21:38.955 00:21:38.955 ' 00:21:38.955 20:18:24 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:21:38.955 20:18:24 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:21:38.955 20:18:24 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:21:38.955 20:18:24 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:21:38.955 20:18:24 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:21:38.955 20:18:24 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:21:38.955 20:18:24 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:21:38.955 20:18:24 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:21:38.955 20:18:24 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:21:38.955 20:18:24 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:21:38.955 20:18:24 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:21:38.955 20:18:24 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:21:38.955 20:18:24 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:21:38.955 20:18:24 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:21:38.955 20:18:24 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:21:38.955 20:18:24 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:21:38.955 20:18:24 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:21:38.955 20:18:24 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:21:38.955 20:18:24 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:21:38.955 20:18:24 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:21:38.955 20:18:24 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:21:38.955 20:18:24 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:21:38.955 20:18:24 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:21:38.955 20:18:24 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:21:38.955 20:18:24 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=90367 00:21:38.955 20:18:24 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:21:38.955 20:18:24 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 
90367 00:21:38.955 20:18:24 blockdev_raid5f -- common/autotest_common.sh@831 -- # '[' -z 90367 ']' 00:21:38.955 20:18:24 blockdev_raid5f -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:38.955 20:18:24 blockdev_raid5f -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:38.955 20:18:24 blockdev_raid5f -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:38.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:38.955 20:18:24 blockdev_raid5f -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:38.955 20:18:24 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:38.955 20:18:24 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:21:38.955 [2024-10-17 20:18:24.604247] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 00:21:38.955 [2024-10-17 20:18:24.604466] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90367 ] 00:21:39.214 [2024-10-17 20:18:24.768792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:39.473 [2024-10-17 20:18:24.900983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:40.474 20:18:25 blockdev_raid5f -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:40.474 20:18:25 blockdev_raid5f -- common/autotest_common.sh@864 -- # return 0 00:21:40.474 20:18:25 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:21:40.474 20:18:25 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:21:40.474 20:18:25 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:21:40.474 20:18:25 blockdev_raid5f -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.474 20:18:25 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:40.474 Malloc0 00:21:40.474 Malloc1 00:21:40.474 Malloc2 00:21:40.474 20:18:25 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.474 20:18:25 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:21:40.474 20:18:25 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.474 20:18:25 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:40.474 20:18:25 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.474 20:18:25 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:21:40.474 20:18:25 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:21:40.474 20:18:25 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.474 20:18:25 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:40.474 20:18:25 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.474 20:18:25 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:21:40.474 20:18:25 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.474 20:18:25 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:40.474 20:18:25 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.474 20:18:25 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:21:40.474 20:18:25 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.474 20:18:25 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:40.474 20:18:25 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.474 20:18:25 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:21:40.474 20:18:25 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 
00:21:40.474 20:18:25 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:21:40.474 20:18:25 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.474 20:18:25 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:40.474 20:18:25 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.474 20:18:25 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:21:40.474 20:18:25 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:21:40.474 20:18:25 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "9165c101-94df-4434-8e69-f5e8008ff92d"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "9165c101-94df-4434-8e69-f5e8008ff92d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "9165c101-94df-4434-8e69-f5e8008ff92d",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "2ad83666-8bca-49ba-8a67-0db38d700603",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": 
"7bfecd4c-62a9-4b39-80ca-c08c524c604a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "e9dd1cab-6207-4d21-bd0c-97c1d66dfaf6",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:21:40.474 20:18:26 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:21:40.474 20:18:26 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:21:40.474 20:18:26 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:21:40.474 20:18:26 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 90367 00:21:40.474 20:18:26 blockdev_raid5f -- common/autotest_common.sh@950 -- # '[' -z 90367 ']' 00:21:40.474 20:18:26 blockdev_raid5f -- common/autotest_common.sh@954 -- # kill -0 90367 00:21:40.474 20:18:26 blockdev_raid5f -- common/autotest_common.sh@955 -- # uname 00:21:40.474 20:18:26 blockdev_raid5f -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:40.474 20:18:26 blockdev_raid5f -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90367 00:21:40.474 20:18:26 blockdev_raid5f -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:40.474 20:18:26 blockdev_raid5f -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:40.474 killing process with pid 90367 00:21:40.474 20:18:26 blockdev_raid5f -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90367' 00:21:40.474 20:18:26 blockdev_raid5f -- common/autotest_common.sh@969 -- # kill 90367 00:21:40.474 20:18:26 blockdev_raid5f -- common/autotest_common.sh@974 -- # wait 90367 00:21:43.011 20:18:28 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:43.011 20:18:28 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:21:43.011 20:18:28 
blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:21:43.011 20:18:28 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:43.011 20:18:28 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:43.011 ************************************ 00:21:43.011 START TEST bdev_hello_world 00:21:43.011 ************************************ 00:21:43.011 20:18:28 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:21:43.011 [2024-10-17 20:18:28.566983] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 00:21:43.011 [2024-10-17 20:18:28.567192] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90443 ] 00:21:43.270 [2024-10-17 20:18:28.727870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.270 [2024-10-17 20:18:28.854614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:43.837 [2024-10-17 20:18:29.385215] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:21:43.837 [2024-10-17 20:18:29.385293] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:21:43.837 [2024-10-17 20:18:29.385317] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:21:43.837 [2024-10-17 20:18:29.385898] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:21:43.837 [2024-10-17 20:18:29.386120] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:21:43.837 [2024-10-17 20:18:29.386152] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:21:43.837 [2024-10-17 20:18:29.386227] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:21:43.837 00:21:43.837 [2024-10-17 20:18:29.386257] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:21:45.213 00:21:45.213 real 0m2.181s 00:21:45.213 user 0m1.784s 00:21:45.213 sys 0m0.272s 00:21:45.213 20:18:30 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:45.213 20:18:30 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:21:45.213 ************************************ 00:21:45.213 END TEST bdev_hello_world 00:21:45.213 ************************************ 00:21:45.213 20:18:30 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:21:45.213 20:18:30 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:45.213 20:18:30 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:45.213 20:18:30 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:45.213 ************************************ 00:21:45.213 START TEST bdev_bounds 00:21:45.213 ************************************ 00:21:45.213 20:18:30 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:21:45.213 20:18:30 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90487 00:21:45.213 20:18:30 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:21:45.213 Process bdevio pid: 90487 00:21:45.213 20:18:30 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90487' 00:21:45.213 20:18:30 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:21:45.213 20:18:30 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90487 00:21:45.213 20:18:30 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 90487 ']' 00:21:45.213 20:18:30 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:45.213 20:18:30 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:45.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:45.213 20:18:30 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:45.213 20:18:30 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:45.213 20:18:30 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:21:45.213 [2024-10-17 20:18:30.819084] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 00:21:45.213 [2024-10-17 20:18:30.819923] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90487 ] 00:21:45.472 [2024-10-17 20:18:30.990245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:45.731 [2024-10-17 20:18:31.127326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:45.731 [2024-10-17 20:18:31.127436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:45.731 [2024-10-17 20:18:31.127442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:46.298 20:18:31 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:46.299 20:18:31 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:21:46.299 20:18:31 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:21:46.299 I/O targets: 00:21:46.299 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:21:46.299 00:21:46.299 
00:21:46.299 CUnit - A unit testing framework for C - Version 2.1-3 00:21:46.299 http://cunit.sourceforge.net/ 00:21:46.299 00:21:46.299 00:21:46.299 Suite: bdevio tests on: raid5f 00:21:46.299 Test: blockdev write read block ...passed 00:21:46.299 Test: blockdev write zeroes read block ...passed 00:21:46.299 Test: blockdev write zeroes read no split ...passed 00:21:46.557 Test: blockdev write zeroes read split ...passed 00:21:46.557 Test: blockdev write zeroes read split partial ...passed 00:21:46.557 Test: blockdev reset ...passed 00:21:46.557 Test: blockdev write read 8 blocks ...passed 00:21:46.557 Test: blockdev write read size > 128k ...passed 00:21:46.557 Test: blockdev write read invalid size ...passed 00:21:46.557 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:46.557 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:46.557 Test: blockdev write read max offset ...passed 00:21:46.557 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:46.557 Test: blockdev writev readv 8 blocks ...passed 00:21:46.557 Test: blockdev writev readv 30 x 1block ...passed 00:21:46.557 Test: blockdev writev readv block ...passed 00:21:46.558 Test: blockdev writev readv size > 128k ...passed 00:21:46.558 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:46.558 Test: blockdev comparev and writev ...passed 00:21:46.558 Test: blockdev nvme passthru rw ...passed 00:21:46.558 Test: blockdev nvme passthru vendor specific ...passed 00:21:46.558 Test: blockdev nvme admin passthru ...passed 00:21:46.558 Test: blockdev copy ...passed 00:21:46.558 00:21:46.558 Run Summary: Type Total Ran Passed Failed Inactive 00:21:46.558 suites 1 1 n/a 0 0 00:21:46.558 tests 23 23 23 0 0 00:21:46.558 asserts 130 130 130 0 n/a 00:21:46.558 00:21:46.558 Elapsed time = 0.586 seconds 00:21:46.558 0 00:21:46.558 20:18:32 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 90487 00:21:46.558 
20:18:32 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 90487 ']' 00:21:46.558 20:18:32 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 90487 00:21:46.558 20:18:32 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:21:46.558 20:18:32 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:46.558 20:18:32 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90487 00:21:46.558 20:18:32 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:46.558 20:18:32 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:46.558 killing process with pid 90487 00:21:46.558 20:18:32 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90487' 00:21:46.558 20:18:32 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@969 -- # kill 90487 00:21:46.558 20:18:32 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@974 -- # wait 90487 00:21:47.935 20:18:33 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:21:47.935 00:21:47.935 real 0m2.751s 00:21:47.935 user 0m6.769s 00:21:47.935 sys 0m0.450s 00:21:47.935 20:18:33 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:47.935 20:18:33 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:21:47.935 ************************************ 00:21:47.935 END TEST bdev_bounds 00:21:47.935 ************************************ 00:21:47.935 20:18:33 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:21:47.935 20:18:33 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:21:47.935 20:18:33 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:47.935 
20:18:33 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:47.935 ************************************ 00:21:47.935 START TEST bdev_nbd 00:21:47.935 ************************************ 00:21:47.935 20:18:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:21:47.935 20:18:33 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:21:47.935 20:18:33 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:21:47.935 20:18:33 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:47.935 20:18:33 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:21:47.935 20:18:33 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:21:47.935 20:18:33 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:21:47.935 20:18:33 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:21:47.935 20:18:33 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:21:47.935 20:18:33 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:21:47.935 20:18:33 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:21:47.935 20:18:33 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:21:47.935 20:18:33 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:21:47.935 20:18:33 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:21:47.935 20:18:33 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:21:47.935 20:18:33 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:21:47.935 20:18:33 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90554 00:21:47.935 20:18:33 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:21:47.935 20:18:33 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:21:47.935 20:18:33 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90554 /var/tmp/spdk-nbd.sock 00:21:47.935 20:18:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 90554 ']' 00:21:47.935 20:18:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:21:47.935 20:18:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:47.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:21:47.935 20:18:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:21:47.935 20:18:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:47.935 20:18:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:21:48.206 [2024-10-17 20:18:33.611195] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 
00:21:48.206 [2024-10-17 20:18:33.611358] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:48.206 [2024-10-17 20:18:33.777137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:48.477 [2024-10-17 20:18:33.907726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:49.045 20:18:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:49.045 20:18:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:21:49.045 20:18:34 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:21:49.045 20:18:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:49.045 20:18:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:21:49.045 20:18:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:21:49.045 20:18:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:21:49.045 20:18:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:49.045 20:18:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:21:49.045 20:18:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:21:49.045 20:18:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:21:49.045 20:18:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:21:49.045 20:18:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:21:49.045 20:18:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:21:49.045 20:18:34 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:21:49.304 20:18:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:21:49.304 20:18:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:21:49.304 20:18:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:21:49.304 20:18:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:21:49.304 20:18:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:21:49.304 20:18:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:21:49.304 20:18:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:21:49.304 20:18:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:21:49.304 20:18:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:21:49.304 20:18:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:21:49.304 20:18:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:21:49.304 20:18:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:49.304 1+0 records in 00:21:49.304 1+0 records out 00:21:49.304 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000257169 s, 15.9 MB/s 00:21:49.304 20:18:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:49.304 20:18:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:21:49.304 20:18:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:49.304 20:18:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 
00:21:49.304 20:18:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:21:49.304 20:18:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:21:49.304 20:18:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:21:49.304 20:18:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:49.563 20:18:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:21:49.563 { 00:21:49.563 "nbd_device": "/dev/nbd0", 00:21:49.563 "bdev_name": "raid5f" 00:21:49.563 } 00:21:49.563 ]' 00:21:49.563 20:18:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:21:49.563 20:18:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:21:49.563 { 00:21:49.563 "nbd_device": "/dev/nbd0", 00:21:49.563 "bdev_name": "raid5f" 00:21:49.563 } 00:21:49.563 ]' 00:21:49.563 20:18:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:21:49.821 20:18:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:21:49.822 20:18:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:49.822 20:18:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:49.822 20:18:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:49.822 20:18:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:21:49.822 20:18:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:49.822 20:18:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:21:50.081 20:18:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:21:50.081 20:18:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:50.081 20:18:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:50.081 20:18:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:50.081 20:18:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:50.081 20:18:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:50.081 20:18:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:50.081 20:18:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:50.081 20:18:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:21:50.081 20:18:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:50.081 20:18:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:50.340 20:18:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:21:50.340 20:18:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:21:50.340 20:18:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:21:50.340 20:18:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:21:50.340 20:18:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:21:50.340 20:18:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:21:50.340 20:18:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:21:50.340 20:18:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:21:50.340 20:18:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:21:50.340 20:18:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:21:50.340 20:18:35 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:21:50.340 20:18:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:21:50.340 20:18:35 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:21:50.340 20:18:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:50.340 20:18:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:21:50.340 20:18:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:21:50.340 20:18:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:21:50.340 20:18:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:21:50.340 20:18:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:21:50.340 20:18:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:50.340 20:18:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:21:50.340 20:18:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:50.340 20:18:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:50.340 20:18:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:50.340 20:18:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:21:50.340 20:18:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:50.340 20:18:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:50.340 20:18:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:21:50.599 /dev/nbd0 00:21:50.599 20:18:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:50.599 20:18:36 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:50.599 20:18:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:21:50.599 20:18:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:21:50.599 20:18:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:21:50.599 20:18:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:21:50.599 20:18:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:21:50.599 20:18:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:21:50.599 20:18:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:21:50.599 20:18:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:21:50.599 20:18:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:50.599 1+0 records in 00:21:50.599 1+0 records out 00:21:50.599 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000216502 s, 18.9 MB/s 00:21:50.599 20:18:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:50.599 20:18:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:21:50.599 20:18:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:50.599 20:18:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:21:50.599 20:18:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:21:50.599 20:18:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:50.599 20:18:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:50.599 20:18:36 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:21:50.599 20:18:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:50.599 20:18:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:50.858 20:18:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:21:50.858 { 00:21:50.858 "nbd_device": "/dev/nbd0", 00:21:50.858 "bdev_name": "raid5f" 00:21:50.858 } 00:21:50.858 ]' 00:21:50.858 20:18:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:21:50.858 { 00:21:50.858 "nbd_device": "/dev/nbd0", 00:21:50.858 "bdev_name": "raid5f" 00:21:50.858 } 00:21:50.858 ]' 00:21:50.858 20:18:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:21:51.118 20:18:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:21:51.118 20:18:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:21:51.118 20:18:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:21:51.118 20:18:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:21:51.118 20:18:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:21:51.118 20:18:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:21:51.118 20:18:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:21:51.118 20:18:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:21:51.118 20:18:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:21:51.118 20:18:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:21:51.118 20:18:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:21:51.118 20:18:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:21:51.118 20:18:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:21:51.118 20:18:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:21:51.118 256+0 records in 00:21:51.118 256+0 records out 00:21:51.118 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00788843 s, 133 MB/s 00:21:51.118 20:18:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:21:51.118 20:18:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:21:51.118 256+0 records in 00:21:51.118 256+0 records out 00:21:51.118 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0379888 s, 27.6 MB/s 00:21:51.118 20:18:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:21:51.118 20:18:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:21:51.118 20:18:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:21:51.118 20:18:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:21:51.118 20:18:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:21:51.118 20:18:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:21:51.118 20:18:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:21:51.118 20:18:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:21:51.118 20:18:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:21:51.118 20:18:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:21:51.118 20:18:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:21:51.118 20:18:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:51.118 20:18:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:51.118 20:18:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:51.118 20:18:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:21:51.118 20:18:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:51.118 20:18:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:21:51.378 20:18:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:51.378 20:18:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:51.378 20:18:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:51.378 20:18:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:51.378 20:18:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:51.378 20:18:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:51.378 20:18:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:51.378 20:18:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:51.378 20:18:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:21:51.378 20:18:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:51.378 20:18:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:21:51.637 20:18:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:21:51.637 20:18:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:21:51.637 20:18:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:21:51.637 20:18:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:21:51.637 20:18:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:21:51.637 20:18:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:21:51.637 20:18:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:21:51.637 20:18:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:21:51.637 20:18:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:21:51.637 20:18:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:21:51.637 20:18:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:21:51.637 20:18:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:21:51.637 20:18:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:21:51.637 20:18:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:51.637 20:18:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:21:51.637 20:18:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:21:51.895 malloc_lvol_verify 00:21:51.895 20:18:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:21:52.153 5d56ead5-c452-4f73-97e8-7dcae73a1dc5 00:21:52.153 20:18:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:21:52.412 7cbae339-923e-44b4-9d3d-044fb4c0a166 00:21:52.671 20:18:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:21:52.671 /dev/nbd0 00:21:52.671 20:18:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:21:52.671 20:18:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:21:52.671 20:18:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:21:52.671 20:18:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:21:52.671 20:18:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:21:52.929 mke2fs 1.47.0 (5-Feb-2023) 00:21:52.929 Discarding device blocks: 0/4096 done 00:21:52.929 Creating filesystem with 4096 1k blocks and 1024 inodes 00:21:52.929 00:21:52.929 Allocating group tables: 0/1 done 00:21:52.929 Writing inode tables: 0/1 done 00:21:52.929 Creating journal (1024 blocks): done 00:21:52.929 Writing superblocks and filesystem accounting information: 0/1 done 00:21:52.929 00:21:52.929 20:18:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:21:52.929 20:18:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:52.929 20:18:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:52.929 20:18:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:52.929 20:18:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:21:52.929 20:18:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:52.929 20:18:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:21:53.187 20:18:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:53.187 20:18:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:53.187 20:18:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:53.187 20:18:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:53.187 20:18:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:53.187 20:18:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:53.187 20:18:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:53.187 20:18:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:53.187 20:18:38 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90554 00:21:53.187 20:18:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 90554 ']' 00:21:53.187 20:18:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 90554 00:21:53.187 20:18:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:21:53.187 20:18:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:53.187 20:18:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90554 00:21:53.187 20:18:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:53.187 killing process with pid 90554 00:21:53.187 20:18:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:53.187 20:18:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90554' 00:21:53.187 20:18:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@969 -- # kill 90554 00:21:53.187 20:18:38 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@974 -- # wait 90554 00:21:54.564 20:18:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:21:54.564 00:21:54.564 real 0m6.411s 00:21:54.564 user 0m9.320s 00:21:54.564 sys 0m1.338s 00:21:54.564 ************************************ 00:21:54.564 END TEST bdev_nbd 00:21:54.564 ************************************ 00:21:54.564 20:18:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:54.564 20:18:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:21:54.564 20:18:39 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:21:54.564 20:18:39 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:21:54.564 20:18:39 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:21:54.564 20:18:39 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:21:54.564 20:18:39 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:54.564 20:18:39 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:54.564 20:18:39 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:54.564 ************************************ 00:21:54.564 START TEST bdev_fio 00:21:54.564 ************************************ 00:21:54.564 20:18:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:21:54.564 20:18:39 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:21:54.564 20:18:39 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:21:54.564 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:21:54.564 20:18:39 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:21:54.564 20:18:39 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:21:54.564 20:18:39 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:21:54.564 20:18:39 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:21:54.564 20:18:39 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:21:54.564 20:18:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:54.564 20:18:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:21:54.565 20:18:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:21:54.565 20:18:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:21:54.565 20:18:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:21:54.565 20:18:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:21:54.565 20:18:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:21:54.565 20:18:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:21:54.565 20:18:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:54.565 20:18:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:21:54.565 20:18:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:21:54.565 20:18:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:21:54.565 20:18:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:21:54.565 20:18:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:21:54.565 20:18:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:21:54.565 20:18:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:21:54.565 20:18:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:21:54.565 20:18:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:21:54.565 20:18:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:21:54.565 20:18:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:21:54.565 20:18:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:21:54.565 20:18:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:21:54.565 20:18:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:54.565 20:18:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:21:54.565 ************************************ 00:21:54.565 START TEST bdev_fio_rw_verify 00:21:54.565 ************************************ 00:21:54.565 20:18:40 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:21:54.565 20:18:40 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:21:54.565 20:18:40 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:54.565 20:18:40 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:54.565 20:18:40 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:54.565 20:18:40 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:54.565 20:18:40 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:21:54.565 20:18:40 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:54.565 20:18:40 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:54.565 20:18:40 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:54.565 20:18:40 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:21:54.565 20:18:40 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:54.565 20:18:40 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:54.565 20:18:40 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:54.565 20:18:40 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1347 -- # break 00:21:54.565 20:18:40 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:54.565 20:18:40 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:21:54.824 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:21:54.824 fio-3.35 00:21:54.824 Starting 1 thread 00:22:07.027 00:22:07.027 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90755: Thu Oct 17 20:18:51 2024 00:22:07.027 read: IOPS=8835, BW=34.5MiB/s (36.2MB/s)(345MiB/10001msec) 00:22:07.027 slat (usec): min=21, max=170, avg=27.64, stdev= 5.25 00:22:07.027 clat (usec): min=13, max=734, avg=179.96, stdev=67.71 00:22:07.027 lat (usec): min=40, max=790, avg=207.60, stdev=68.74 00:22:07.027 clat percentiles (usec): 00:22:07.027 | 50.000th=[ 180], 99.000th=[ 314], 99.900th=[ 416], 99.990th=[ 676], 00:22:07.027 | 99.999th=[ 734] 00:22:07.027 write: IOPS=9248, BW=36.1MiB/s (37.9MB/s)(357MiB/9875msec); 0 zone resets 00:22:07.027 slat (usec): min=11, max=141, avg=22.78, stdev= 5.97 00:22:07.027 clat (usec): min=82, max=1484, avg=414.75, stdev=57.69 00:22:07.027 lat (usec): min=104, max=1508, avg=437.53, stdev=59.46 00:22:07.027 clat percentiles (usec): 00:22:07.027 | 50.000th=[ 420], 99.000th=[ 553], 99.900th=[ 709], 99.990th=[ 832], 00:22:07.027 | 99.999th=[ 1483] 00:22:07.027 bw ( KiB/s): min=33784, max=39544, per=99.13%, avg=36674.32, stdev=1799.00, samples=19 00:22:07.027 iops : min= 8446, max= 9886, avg=9168.58, stdev=449.75, samples=19 00:22:07.027 lat (usec) : 20=0.01%, 100=6.53%, 250=33.67%, 
500=57.54%, 750=2.25% 00:22:07.027 lat (usec) : 1000=0.01% 00:22:07.027 lat (msec) : 2=0.01% 00:22:07.027 cpu : usr=98.68%, sys=0.32%, ctx=20, majf=0, minf=7608 00:22:07.027 IO depths : 1=7.7%, 2=19.8%, 4=55.2%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:07.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:07.027 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:07.027 issued rwts: total=88362,91332,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:07.027 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:07.027 00:22:07.027 Run status group 0 (all jobs): 00:22:07.027 READ: bw=34.5MiB/s (36.2MB/s), 34.5MiB/s-34.5MiB/s (36.2MB/s-36.2MB/s), io=345MiB (362MB), run=10001-10001msec 00:22:07.027 WRITE: bw=36.1MiB/s (37.9MB/s), 36.1MiB/s-36.1MiB/s (37.9MB/s-37.9MB/s), io=357MiB (374MB), run=9875-9875msec 00:22:07.027 ----------------------------------------------------- 00:22:07.027 Suppressions used: 00:22:07.027 count bytes template 00:22:07.027 1 7 /usr/src/fio/parse.c 00:22:07.027 234 22464 /usr/src/fio/iolog.c 00:22:07.027 1 8 libtcmalloc_minimal.so 00:22:07.027 1 904 libcrypto.so 00:22:07.027 ----------------------------------------------------- 00:22:07.027 00:22:07.284 00:22:07.284 real 0m12.615s 00:22:07.284 user 0m12.937s 00:22:07.284 sys 0m0.799s 00:22:07.284 20:18:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:07.284 20:18:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:22:07.284 ************************************ 00:22:07.284 END TEST bdev_fio_rw_verify 00:22:07.284 ************************************ 00:22:07.284 20:18:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:22:07.284 20:18:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:07.284 20:18:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@353 -- # 
fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:22:07.284 20:18:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:07.284 20:18:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:22:07.284 20:18:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:22:07.284 20:18:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:22:07.284 20:18:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:22:07.284 20:18:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:22:07.284 20:18:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:22:07.284 20:18:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:22:07.284 20:18:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:07.284 20:18:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:22:07.285 20:18:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:22:07.285 20:18:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:22:07.285 20:18:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:22:07.285 20:18:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "9165c101-94df-4434-8e69-f5e8008ff92d"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "9165c101-94df-4434-8e69-f5e8008ff92d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' 
"claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "9165c101-94df-4434-8e69-f5e8008ff92d",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "2ad83666-8bca-49ba-8a67-0db38d700603",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "7bfecd4c-62a9-4b39-80ca-c08c524c604a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "e9dd1cab-6207-4d21-bd0c-97c1d66dfaf6",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:22:07.285 20:18:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:22:07.285 20:18:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:22:07.285 20:18:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:07.285 20:18:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:22:07.285 /home/vagrant/spdk_repo/spdk 00:22:07.285 20:18:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:22:07.285 20:18:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:22:07.285 00:22:07.285 real 0m12.843s 
00:22:07.285 user 0m13.048s 00:22:07.285 sys 0m0.887s 00:22:07.285 20:18:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:07.285 20:18:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:22:07.285 ************************************ 00:22:07.285 END TEST bdev_fio 00:22:07.285 ************************************ 00:22:07.285 20:18:52 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:07.285 20:18:52 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:22:07.285 20:18:52 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:22:07.285 20:18:52 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:07.285 20:18:52 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:07.285 ************************************ 00:22:07.285 START TEST bdev_verify 00:22:07.285 ************************************ 00:22:07.285 20:18:52 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:22:07.543 [2024-10-17 20:18:52.984312] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 
00:22:07.543 [2024-10-17 20:18:52.984496] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90919 ] 00:22:07.543 [2024-10-17 20:18:53.161708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:07.801 [2024-10-17 20:18:53.315498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:07.801 [2024-10-17 20:18:53.315510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:08.368 Running I/O for 5 seconds... 00:22:10.260 12313.00 IOPS, 48.10 MiB/s [2024-10-17T20:18:57.289Z] 13328.00 IOPS, 52.06 MiB/s [2024-10-17T20:18:58.228Z] 13295.00 IOPS, 51.93 MiB/s [2024-10-17T20:18:59.162Z] 13302.75 IOPS, 51.96 MiB/s [2024-10-17T20:18:59.162Z] 13333.20 IOPS, 52.08 MiB/s 00:22:13.508 Latency(us) 00:22:13.508 [2024-10-17T20:18:59.162Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:13.508 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:13.508 Verification LBA range: start 0x0 length 0x2000 00:22:13.508 raid5f : 5.02 6626.10 25.88 0.00 0.00 29103.55 249.48 24427.05 00:22:13.508 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:13.508 Verification LBA range: start 0x2000 length 0x2000 00:22:13.508 raid5f : 5.02 6707.85 26.20 0.00 0.00 28674.89 121.02 24069.59 00:22:13.508 [2024-10-17T20:18:59.162Z] =================================================================================================================== 00:22:13.508 [2024-10-17T20:18:59.162Z] Total : 13333.95 52.09 0.00 0.00 28887.80 121.02 24427.05 00:22:14.883 00:22:14.883 real 0m7.249s 00:22:14.883 user 0m13.300s 00:22:14.883 sys 0m0.309s 00:22:14.883 20:19:00 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:14.883 20:19:00 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:22:14.883 ************************************ 00:22:14.883 END TEST bdev_verify 00:22:14.883 ************************************ 00:22:14.884 20:19:00 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:22:14.884 20:19:00 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:22:14.884 20:19:00 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:14.884 20:19:00 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:14.884 ************************************ 00:22:14.884 START TEST bdev_verify_big_io 00:22:14.884 ************************************ 00:22:14.884 20:19:00 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:22:14.884 [2024-10-17 20:19:00.261107] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 00:22:14.884 [2024-10-17 20:19:00.261282] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91012 ] 00:22:14.884 [2024-10-17 20:19:00.420380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:15.142 [2024-10-17 20:19:00.543916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:15.142 [2024-10-17 20:19:00.543917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:15.709 Running I/O for 5 seconds... 
00:22:17.648 693.00 IOPS, 43.31 MiB/s [2024-10-17T20:19:04.236Z] 760.00 IOPS, 47.50 MiB/s [2024-10-17T20:19:05.610Z] 761.33 IOPS, 47.58 MiB/s [2024-10-17T20:19:06.544Z] 761.50 IOPS, 47.59 MiB/s [2024-10-17T20:19:06.544Z] 774.00 IOPS, 48.38 MiB/s 00:22:20.890 Latency(us) 00:22:20.890 [2024-10-17T20:19:06.544Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:20.890 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:22:20.890 Verification LBA range: start 0x0 length 0x200 00:22:20.890 raid5f : 5.13 396.18 24.76 0.00 0.00 8029084.29 193.63 327918.31 00:22:20.890 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:22:20.890 Verification LBA range: start 0x200 length 0x200 00:22:20.890 raid5f : 5.21 402.11 25.13 0.00 0.00 7834168.81 176.87 322198.81 00:22:20.890 [2024-10-17T20:19:06.544Z] =================================================================================================================== 00:22:20.890 [2024-10-17T20:19:06.544Z] Total : 798.28 49.89 0.00 0.00 7930162.08 176.87 327918.31 00:22:22.262 00:22:22.262 real 0m7.395s 00:22:22.262 user 0m13.662s 00:22:22.262 sys 0m0.296s 00:22:22.262 20:19:07 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:22.262 20:19:07 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:22:22.262 ************************************ 00:22:22.262 END TEST bdev_verify_big_io 00:22:22.262 ************************************ 00:22:22.262 20:19:07 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:22:22.262 20:19:07 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:22:22.262 20:19:07 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:22.262 20:19:07 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:22.262 ************************************ 00:22:22.262 START TEST bdev_write_zeroes 00:22:22.262 ************************************ 00:22:22.262 20:19:07 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:22:22.262 [2024-10-17 20:19:07.714120] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 00:22:22.262 [2024-10-17 20:19:07.714291] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91105 ] 00:22:22.262 [2024-10-17 20:19:07.880046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:22.526 [2024-10-17 20:19:08.011497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:23.126 Running I/O for 1 seconds... 
00:22:24.061 19911.00 IOPS, 77.78 MiB/s 00:22:24.061 Latency(us) 00:22:24.061 [2024-10-17T20:19:09.715Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:24.061 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:22:24.061 raid5f : 1.01 19886.43 77.68 0.00 0.00 6410.81 1936.29 8638.84 00:22:24.061 [2024-10-17T20:19:09.715Z] =================================================================================================================== 00:22:24.061 [2024-10-17T20:19:09.715Z] Total : 19886.43 77.68 0.00 0.00 6410.81 1936.29 8638.84 00:22:25.435 00:22:25.435 real 0m3.201s 00:22:25.435 user 0m2.780s 00:22:25.435 sys 0m0.288s 00:22:25.435 20:19:10 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:25.435 20:19:10 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:22:25.435 ************************************ 00:22:25.435 END TEST bdev_write_zeroes 00:22:25.435 ************************************ 00:22:25.435 20:19:10 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:22:25.435 20:19:10 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:22:25.435 20:19:10 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:25.435 20:19:10 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:25.435 ************************************ 00:22:25.435 START TEST bdev_json_nonenclosed 00:22:25.435 ************************************ 00:22:25.435 20:19:10 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:22:25.435 [2024-10-17 
20:19:10.992679] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 00:22:25.435 [2024-10-17 20:19:10.992848] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91158 ] 00:22:25.693 [2024-10-17 20:19:11.167967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:25.693 [2024-10-17 20:19:11.298904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:25.693 [2024-10-17 20:19:11.299064] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:22:25.693 [2024-10-17 20:19:11.299106] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:22:25.693 [2024-10-17 20:19:11.299121] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:25.951 00:22:25.951 real 0m0.679s 00:22:25.951 user 0m0.432s 00:22:25.951 sys 0m0.141s 00:22:25.951 20:19:11 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:25.951 20:19:11 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:22:25.951 ************************************ 00:22:25.951 END TEST bdev_json_nonenclosed 00:22:25.951 ************************************ 00:22:26.210 20:19:11 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:22:26.210 20:19:11 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:22:26.210 20:19:11 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:26.210 20:19:11 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:26.210 
************************************ 00:22:26.210 START TEST bdev_json_nonarray 00:22:26.210 ************************************ 00:22:26.210 20:19:11 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:22:26.210 [2024-10-17 20:19:11.702242] Starting SPDK v25.01-pre git sha1 5c4ed23c8 / DPDK 24.03.0 initialization... 00:22:26.210 [2024-10-17 20:19:11.702409] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91184 ] 00:22:26.468 [2024-10-17 20:19:11.862191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:26.468 [2024-10-17 20:19:11.992676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:26.468 [2024-10-17 20:19:11.992808] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:22:26.468 [2024-10-17 20:19:11.992837] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:22:26.468 [2024-10-17 20:19:11.992863] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:26.726 00:22:26.726 real 0m0.635s 00:22:26.726 user 0m0.403s 00:22:26.726 sys 0m0.128s 00:22:26.726 20:19:12 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:26.726 20:19:12 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:22:26.726 ************************************ 00:22:26.726 END TEST bdev_json_nonarray 00:22:26.726 ************************************ 00:22:26.726 20:19:12 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:22:26.726 20:19:12 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:22:26.726 20:19:12 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:22:26.726 20:19:12 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:22:26.726 20:19:12 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:22:26.726 20:19:12 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:22:26.727 20:19:12 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:22:26.727 20:19:12 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:22:26.727 20:19:12 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:22:26.727 20:19:12 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:22:26.727 20:19:12 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:22:26.727 00:22:26.727 real 0m47.997s 00:22:26.727 user 1m5.882s 00:22:26.727 sys 0m5.066s 00:22:26.727 20:19:12 blockdev_raid5f -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:26.727 20:19:12 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:26.727 
************************************ 00:22:26.727 END TEST blockdev_raid5f 00:22:26.727 ************************************ 00:22:26.727 20:19:12 -- spdk/autotest.sh@194 -- # uname -s 00:22:26.727 20:19:12 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:22:26.727 20:19:12 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:22:26.727 20:19:12 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:22:26.727 20:19:12 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:22:26.727 20:19:12 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:22:26.727 20:19:12 -- spdk/autotest.sh@256 -- # timing_exit lib 00:22:26.727 20:19:12 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:26.727 20:19:12 -- common/autotest_common.sh@10 -- # set +x 00:22:26.985 20:19:12 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:22:26.985 20:19:12 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:22:26.985 20:19:12 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:22:26.985 20:19:12 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:22:26.985 20:19:12 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:22:26.985 20:19:12 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:22:26.985 20:19:12 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:22:26.985 20:19:12 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:22:26.985 20:19:12 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:22:26.985 20:19:12 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:22:26.985 20:19:12 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:22:26.985 20:19:12 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:22:26.985 20:19:12 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:22:26.985 20:19:12 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:22:26.985 20:19:12 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:22:26.985 20:19:12 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:22:26.985 20:19:12 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:22:26.985 20:19:12 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:22:26.985 20:19:12 -- spdk/autotest.sh@381 -- # trap - SIGINT 
SIGTERM EXIT 00:22:26.985 20:19:12 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:22:26.985 20:19:12 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:26.985 20:19:12 -- common/autotest_common.sh@10 -- # set +x 00:22:26.985 20:19:12 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:22:26.985 20:19:12 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:22:26.985 20:19:12 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:22:26.985 20:19:12 -- common/autotest_common.sh@10 -- # set +x 00:22:28.884 INFO: APP EXITING 00:22:28.884 INFO: killing all VMs 00:22:28.884 INFO: killing vhost app 00:22:28.884 INFO: EXIT DONE 00:22:28.884 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:28.884 Waiting for block devices as requested 00:22:28.884 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:22:29.159 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:22:29.770 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:29.770 Cleaning 00:22:29.770 Removing: /var/run/dpdk/spdk0/config 00:22:29.770 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:22:29.770 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:22:29.770 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:22:29.770 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:22:30.028 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:22:30.028 Removing: /var/run/dpdk/spdk0/hugepage_info 00:22:30.028 Removing: /dev/shm/spdk_tgt_trace.pid56781 00:22:30.028 Removing: /var/run/dpdk/spdk0 00:22:30.028 Removing: /var/run/dpdk/spdk_pid56535 00:22:30.028 Removing: /var/run/dpdk/spdk_pid56781 00:22:30.028 Removing: /var/run/dpdk/spdk_pid57019 00:22:30.028 Removing: /var/run/dpdk/spdk_pid57125 00:22:30.028 Removing: /var/run/dpdk/spdk_pid57180 00:22:30.028 Removing: /var/run/dpdk/spdk_pid57315 00:22:30.028 Removing: /var/run/dpdk/spdk_pid57333 
00:22:30.029 Removing: /var/run/dpdk/spdk_pid57543 00:22:30.029 Removing: /var/run/dpdk/spdk_pid57649 00:22:30.029 Removing: /var/run/dpdk/spdk_pid57756 00:22:30.029 Removing: /var/run/dpdk/spdk_pid57884 00:22:30.029 Removing: /var/run/dpdk/spdk_pid57992 00:22:30.029 Removing: /var/run/dpdk/spdk_pid58031 00:22:30.029 Removing: /var/run/dpdk/spdk_pid58068 00:22:30.029 Removing: /var/run/dpdk/spdk_pid58144 00:22:30.029 Removing: /var/run/dpdk/spdk_pid58250 00:22:30.029 Removing: /var/run/dpdk/spdk_pid58719 00:22:30.029 Removing: /var/run/dpdk/spdk_pid58794 00:22:30.029 Removing: /var/run/dpdk/spdk_pid58868 00:22:30.029 Removing: /var/run/dpdk/spdk_pid58884 00:22:30.029 Removing: /var/run/dpdk/spdk_pid59032 00:22:30.029 Removing: /var/run/dpdk/spdk_pid59053 00:22:30.029 Removing: /var/run/dpdk/spdk_pid59198 00:22:30.029 Removing: /var/run/dpdk/spdk_pid59219 00:22:30.029 Removing: /var/run/dpdk/spdk_pid59288 00:22:30.029 Removing: /var/run/dpdk/spdk_pid59307 00:22:30.029 Removing: /var/run/dpdk/spdk_pid59371 00:22:30.029 Removing: /var/run/dpdk/spdk_pid59389 00:22:30.029 Removing: /var/run/dpdk/spdk_pid59584 00:22:30.029 Removing: /var/run/dpdk/spdk_pid59626 00:22:30.029 Removing: /var/run/dpdk/spdk_pid59704 00:22:30.029 Removing: /var/run/dpdk/spdk_pid61064 00:22:30.029 Removing: /var/run/dpdk/spdk_pid61275 00:22:30.029 Removing: /var/run/dpdk/spdk_pid61421 00:22:30.029 Removing: /var/run/dpdk/spdk_pid62075 00:22:30.029 Removing: /var/run/dpdk/spdk_pid62287 00:22:30.029 Removing: /var/run/dpdk/spdk_pid62427 00:22:30.029 Removing: /var/run/dpdk/spdk_pid63081 00:22:30.029 Removing: /var/run/dpdk/spdk_pid63417 00:22:30.029 Removing: /var/run/dpdk/spdk_pid63557 00:22:30.029 Removing: /var/run/dpdk/spdk_pid64981 00:22:30.029 Removing: /var/run/dpdk/spdk_pid65245 00:22:30.029 Removing: /var/run/dpdk/spdk_pid65385 00:22:30.029 Removing: /var/run/dpdk/spdk_pid66798 00:22:30.029 Removing: /var/run/dpdk/spdk_pid67062 00:22:30.029 Removing: /var/run/dpdk/spdk_pid67202 
00:22:30.029 Removing: /var/run/dpdk/spdk_pid68615 00:22:30.029 Removing: /var/run/dpdk/spdk_pid69072 00:22:30.029 Removing: /var/run/dpdk/spdk_pid69222 00:22:30.029 Removing: /var/run/dpdk/spdk_pid70736 00:22:30.029 Removing: /var/run/dpdk/spdk_pid71001 00:22:30.029 Removing: /var/run/dpdk/spdk_pid71155 00:22:30.029 Removing: /var/run/dpdk/spdk_pid72670 00:22:30.029 Removing: /var/run/dpdk/spdk_pid72935 00:22:30.029 Removing: /var/run/dpdk/spdk_pid73085 00:22:30.029 Removing: /var/run/dpdk/spdk_pid74594 00:22:30.029 Removing: /var/run/dpdk/spdk_pid75087 00:22:30.029 Removing: /var/run/dpdk/spdk_pid75237 00:22:30.029 Removing: /var/run/dpdk/spdk_pid75382 00:22:30.029 Removing: /var/run/dpdk/spdk_pid75823 00:22:30.029 Removing: /var/run/dpdk/spdk_pid76593 00:22:30.029 Removing: /var/run/dpdk/spdk_pid76975 00:22:30.029 Removing: /var/run/dpdk/spdk_pid77696 00:22:30.029 Removing: /var/run/dpdk/spdk_pid78170 00:22:30.029 Removing: /var/run/dpdk/spdk_pid78963 00:22:30.029 Removing: /var/run/dpdk/spdk_pid79383 00:22:30.029 Removing: /var/run/dpdk/spdk_pid81386 00:22:30.029 Removing: /var/run/dpdk/spdk_pid81834 00:22:30.029 Removing: /var/run/dpdk/spdk_pid82290 00:22:30.029 Removing: /var/run/dpdk/spdk_pid84418 00:22:30.029 Removing: /var/run/dpdk/spdk_pid84905 00:22:30.029 Removing: /var/run/dpdk/spdk_pid85414 00:22:30.029 Removing: /var/run/dpdk/spdk_pid86489 00:22:30.029 Removing: /var/run/dpdk/spdk_pid86818 00:22:30.029 Removing: /var/run/dpdk/spdk_pid87773 00:22:30.029 Removing: /var/run/dpdk/spdk_pid88102 00:22:30.029 Removing: /var/run/dpdk/spdk_pid89062 00:22:30.029 Removing: /var/run/dpdk/spdk_pid89393 00:22:30.029 Removing: /var/run/dpdk/spdk_pid90092 00:22:30.029 Removing: /var/run/dpdk/spdk_pid90367 00:22:30.029 Removing: /var/run/dpdk/spdk_pid90443 00:22:30.029 Removing: /var/run/dpdk/spdk_pid90487 00:22:30.288 Removing: /var/run/dpdk/spdk_pid90744 00:22:30.288 Removing: /var/run/dpdk/spdk_pid90919 00:22:30.288 Removing: /var/run/dpdk/spdk_pid91012 
00:22:30.288 Removing: /var/run/dpdk/spdk_pid91105 00:22:30.288 Removing: /var/run/dpdk/spdk_pid91158 00:22:30.288 Removing: /var/run/dpdk/spdk_pid91184 00:22:30.288 Clean 00:22:30.288 20:19:15 -- common/autotest_common.sh@1451 -- # return 0 00:22:30.288 20:19:15 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:22:30.288 20:19:15 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:30.288 20:19:15 -- common/autotest_common.sh@10 -- # set +x 00:22:30.288 20:19:15 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:22:30.288 20:19:15 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:30.288 20:19:15 -- common/autotest_common.sh@10 -- # set +x 00:22:30.288 20:19:15 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:22:30.288 20:19:15 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:22:30.288 20:19:15 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:22:30.288 20:19:15 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:22:30.288 20:19:15 -- spdk/autotest.sh@394 -- # hostname 00:22:30.288 20:19:15 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:22:30.547 geninfo: WARNING: invalid characters removed from testname! 
00:22:57.120 20:19:40 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:58.497 20:19:43 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:01.030 20:19:46 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:03.602 20:19:49 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:06.886 20:19:51 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:09.488 20:19:54 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:11.391 20:19:57 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:23:11.650 20:19:57 -- common/autotest_common.sh@1690 -- $ [[ y == y ]] 00:23:11.650 20:19:57 -- common/autotest_common.sh@1691 -- $ lcov --version 00:23:11.650 20:19:57 -- common/autotest_common.sh@1691 -- $ awk '{print $NF}' 00:23:11.650 20:19:57 -- common/autotest_common.sh@1691 -- $ lt 1.15 2 00:23:11.650 20:19:57 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:23:11.650 20:19:57 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:23:11.650 20:19:57 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:23:11.650 20:19:57 -- scripts/common.sh@336 -- $ IFS=.-: 00:23:11.650 20:19:57 -- scripts/common.sh@336 -- $ read -ra ver1 00:23:11.650 20:19:57 -- scripts/common.sh@337 -- $ IFS=.-: 00:23:11.650 20:19:57 -- scripts/common.sh@337 -- $ read -ra ver2 00:23:11.650 20:19:57 -- scripts/common.sh@338 -- $ local 'op=<' 00:23:11.650 20:19:57 -- scripts/common.sh@340 -- $ ver1_l=2 00:23:11.650 20:19:57 -- scripts/common.sh@341 -- $ ver2_l=1 00:23:11.650 20:19:57 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:23:11.651 20:19:57 -- scripts/common.sh@344 -- $ case "$op" in 00:23:11.651 20:19:57 -- scripts/common.sh@345 -- $ : 1 00:23:11.651 20:19:57 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:23:11.651 20:19:57 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:11.651 20:19:57 -- scripts/common.sh@365 -- $ decimal 1 00:23:11.651 20:19:57 -- scripts/common.sh@353 -- $ local d=1 00:23:11.651 20:19:57 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:23:11.651 20:19:57 -- scripts/common.sh@355 -- $ echo 1 00:23:11.651 20:19:57 -- scripts/common.sh@365 -- $ ver1[v]=1 00:23:11.651 20:19:57 -- scripts/common.sh@366 -- $ decimal 2 00:23:11.651 20:19:57 -- scripts/common.sh@353 -- $ local d=2 00:23:11.651 20:19:57 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:23:11.651 20:19:57 -- scripts/common.sh@355 -- $ echo 2 00:23:11.651 20:19:57 -- scripts/common.sh@366 -- $ ver2[v]=2 00:23:11.651 20:19:57 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:23:11.651 20:19:57 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:23:11.651 20:19:57 -- scripts/common.sh@368 -- $ return 0 00:23:11.651 20:19:57 -- common/autotest_common.sh@1692 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:11.651 20:19:57 -- common/autotest_common.sh@1704 -- $ export 'LCOV_OPTS= 00:23:11.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.651 --rc genhtml_branch_coverage=1 00:23:11.651 --rc genhtml_function_coverage=1 00:23:11.651 --rc genhtml_legend=1 00:23:11.651 --rc geninfo_all_blocks=1 00:23:11.651 --rc geninfo_unexecuted_blocks=1 00:23:11.651 00:23:11.651 ' 00:23:11.651 20:19:57 -- common/autotest_common.sh@1704 -- $ LCOV_OPTS=' 00:23:11.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.651 --rc genhtml_branch_coverage=1 00:23:11.651 --rc genhtml_function_coverage=1 00:23:11.651 --rc genhtml_legend=1 00:23:11.651 --rc geninfo_all_blocks=1 00:23:11.651 --rc geninfo_unexecuted_blocks=1 00:23:11.651 00:23:11.651 ' 00:23:11.651 20:19:57 -- common/autotest_common.sh@1705 -- $ export 'LCOV=lcov 00:23:11.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.651 --rc genhtml_branch_coverage=1 00:23:11.651 --rc 
genhtml_function_coverage=1 00:23:11.651 --rc genhtml_legend=1 00:23:11.651 --rc geninfo_all_blocks=1 00:23:11.651 --rc geninfo_unexecuted_blocks=1 00:23:11.651 00:23:11.651 ' 00:23:11.651 20:19:57 -- common/autotest_common.sh@1705 -- $ LCOV='lcov 00:23:11.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.651 --rc genhtml_branch_coverage=1 00:23:11.651 --rc genhtml_function_coverage=1 00:23:11.651 --rc genhtml_legend=1 00:23:11.651 --rc geninfo_all_blocks=1 00:23:11.651 --rc geninfo_unexecuted_blocks=1 00:23:11.651 00:23:11.651 ' 00:23:11.651 20:19:57 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:11.651 20:19:57 -- scripts/common.sh@15 -- $ shopt -s extglob 00:23:11.651 20:19:57 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:23:11.651 20:19:57 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:11.651 20:19:57 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:11.651 20:19:57 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.651 20:19:57 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.651 20:19:57 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.651 20:19:57 -- paths/export.sh@5 -- $ export PATH 00:23:11.651 20:19:57 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.651 20:19:57 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:23:11.651 20:19:57 -- common/autobuild_common.sh@486 -- $ date +%s 00:23:11.651 20:19:57 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1729196397.XXXXXX 00:23:11.651 20:19:57 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1729196397.yArvhy 00:23:11.651 20:19:57 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:23:11.651 20:19:57 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:23:11.651 20:19:57 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:23:11.651 20:19:57 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:23:11.651 20:19:57 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:23:11.651 20:19:57 -- common/autobuild_common.sh@502 -- $ 
get_config_params 00:23:11.651 20:19:57 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:23:11.651 20:19:57 -- common/autotest_common.sh@10 -- $ set +x 00:23:11.651 20:19:57 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 00:23:11.651 20:19:57 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:23:11.651 20:19:57 -- pm/common@17 -- $ local monitor 00:23:11.651 20:19:57 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:23:11.651 20:19:57 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:23:11.651 20:19:57 -- pm/common@25 -- $ sleep 1 00:23:11.651 20:19:57 -- pm/common@21 -- $ date +%s 00:23:11.651 20:19:57 -- pm/common@21 -- $ date +%s 00:23:11.651 20:19:57 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1729196397 00:23:11.651 20:19:57 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1729196397 00:23:11.651 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1729196397_collect-cpu-load.pm.log 00:23:11.651 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1729196397_collect-vmstat.pm.log 00:23:12.586 20:19:58 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:23:12.586 20:19:58 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:23:12.586 20:19:58 -- spdk/autopackage.sh@14 -- $ timing_finish 00:23:12.586 20:19:58 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:23:12.586 20:19:58 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:23:12.586 
20:19:58 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:23:12.844 20:19:58 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:23:12.844 20:19:58 -- pm/common@29 -- $ signal_monitor_resources TERM 00:23:12.844 20:19:58 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:23:12.844 20:19:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:23:12.844 20:19:58 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:23:12.844 20:19:58 -- pm/common@44 -- $ pid=92730 00:23:12.844 20:19:58 -- pm/common@50 -- $ kill -TERM 92730 00:23:12.844 20:19:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:23:12.844 20:19:58 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:23:12.844 20:19:58 -- pm/common@44 -- $ pid=92731 00:23:12.844 20:19:58 -- pm/common@50 -- $ kill -TERM 92731 00:23:12.844 + [[ -n 5247 ]] 00:23:12.844 + sudo kill 5247 00:23:12.853 [Pipeline] } 00:23:12.872 [Pipeline] // timeout 00:23:12.877 [Pipeline] } 00:23:12.894 [Pipeline] // stage 00:23:12.899 [Pipeline] } 00:23:12.916 [Pipeline] // catchError 00:23:12.925 [Pipeline] stage 00:23:12.928 [Pipeline] { (Stop VM) 00:23:12.941 [Pipeline] sh 00:23:13.221 + vagrant halt 00:23:17.438 ==> default: Halting domain... 00:23:22.750 [Pipeline] sh 00:23:23.030 + vagrant destroy -f 00:23:27.219 ==> default: Removing domain... 
00:23:27.232 [Pipeline] sh 00:23:27.512 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:23:27.520 [Pipeline] } 00:23:27.536 [Pipeline] // stage 00:23:27.542 [Pipeline] } 00:23:27.556 [Pipeline] // dir 00:23:27.562 [Pipeline] } 00:23:27.576 [Pipeline] // wrap 00:23:27.583 [Pipeline] } 00:23:27.595 [Pipeline] // catchError 00:23:27.604 [Pipeline] stage 00:23:27.607 [Pipeline] { (Epilogue) 00:23:27.620 [Pipeline] sh 00:23:27.901 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:23:34.476 [Pipeline] catchError 00:23:34.478 [Pipeline] { 00:23:34.491 [Pipeline] sh 00:23:34.834 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:23:35.093 Artifacts sizes are good 00:23:35.101 [Pipeline] } 00:23:35.116 [Pipeline] // catchError 00:23:35.128 [Pipeline] archiveArtifacts 00:23:35.135 Archiving artifacts 00:23:35.257 [Pipeline] cleanWs 00:23:35.269 [WS-CLEANUP] Deleting project workspace... 00:23:35.269 [WS-CLEANUP] Deferred wipeout is used... 00:23:35.275 [WS-CLEANUP] done 00:23:35.276 [Pipeline] } 00:23:35.292 [Pipeline] // stage 00:23:35.298 [Pipeline] } 00:23:35.312 [Pipeline] // node 00:23:35.317 [Pipeline] End of Pipeline 00:23:35.355 Finished: SUCCESS